Sig.ma - Uncovering & Maximizing The Web Of Data

Sig.ma describes itself as “a data aggregator for the semantic web”. It aims to leave its mark by providing users with large-scale semantic indexing, data aggregation heuristics, pragmatic ontology alignments and logic reasoning. By combining all these features, the user can come up with entity descriptions which eventually become embeddable data summaries that can be used when tweeting, blogging or just about anywhere else.

What good is this for? Rather, is there a way this can be explained to the average user (i.e., you and I) in a light that does not leave us scratching our chins? Yes. Basically, if used correctly this can lead to richer and more precise information than that procured by methods relying simply on web text analysis. That is, data is searched and found by employing precise attribute-value searches and not mere strings of text. The features listed above also imply that Sig.ma learns from user behavior, meaning that if a piece of data is deleted it will not just disappear from Sig.ma but also never crop up again.
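To see why attribute-value search beats plain text matching, consider this minimal sketch. The records, field names and search functions below are purely hypothetical illustrations and have nothing to do with Sig.ma's actual data model or API:

```python
# Hypothetical records: the kind of structured data a semantic
# aggregator works with, as opposed to raw page text.
records = [
    {"name": "Ada Lovelace", "occupation": "mathematician", "born": 1815},
    {"name": "Ada, Oklahoma", "type": "city", "population": 16481},
]

def text_search(records, query):
    """Naive string matching: any record mentioning the query matches."""
    q = query.lower()
    return [r for r in records
            if any(q in str(v).lower() for v in r.values())]

def attribute_search(records, attribute, value):
    """Structured lookup: match only records whose attribute equals value."""
    return [r for r in records if r.get(attribute) == value]

# Text search cannot tell the person apart from the city...
print(len(text_search(records, "Ada")))  # → 2
# ...while an attribute-value query pinpoints exactly what was asked for.
print(len(attribute_search(records, "occupation", "mathematician")))  # → 1
```

The point is simply that querying by attribute and value removes the ambiguity inherent in matching strings of text.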

When fully developed, this system could mean that structured information can be both retrieved and published by the public at large. It is a bit difficult to convey every nuance of Sig.ma in such a short space, so I advise you to stop by the site in order to read the articles and also watch the provided tutorials and introductory videos.

Publication Date

July 29th 2009

Source

www.killerstartups.com