SRI International is searching for a Computer Scientist in Menlo Park, CA. According to the post, this position will “Conduct research into fundamental computer and information science developing techniques for knowledge base optimization. Will perform research to re-design and reengineer critical pieces of the system and develop new applications. Will perform research and design and implement a query language for knowledge base systems; reasoning algorithms for inference tasks including similarity reasoning, relationship reasoning and para-consistent reasoning; and knowledge modeling software. Optimize the performance of reasoning algorithms and evaluate relative advantages of open source and commercial reasoning systems.” Read more
Sramana Mitra of Wired recently wrote, “Back in 2007, even before the iPhone was launched, giving us a powerful computer in our pockets or handbags, I started outlining a vision for Web 3.0. Tim Berners-Lee, a father of the World Wide Web, talks about the ‘Semantic Web,’ a way that computers employ the meaning of words — not just pattern matching — along with logical rules to connect independent nuggets of data and so create more context for information. The formula that makes the most sense to me is this: Web 3.0 results from combining content, commerce, community and context, with personalization and vertical search. Or, to put it in a handy phrase: Web 3.0 = (4C + P + VS).” Read more
Jack Flanagan of Real Business reports, “The future of the web is semantic – at least according to French tech startup Sépage, which specialises in semantic technologies for travel websites. However, the little-known, little-understood technology is still crossing the distance between science and business. Real Business sought comment from Sépage on what this is, and how they’ve built it. Sépage told Real Business, “We believe the potential is immense. Most of today’s digital marketing approaches aren’t actually personalised, even though that’s what they claim; comparing your basket to thousands of others and clustering you into groups of ‘similar individuals’ can’t really be called personalisation.” Read more
Oxford University Press needs a data engineer. The job description states:
“To design and implement XML data structures, devise XML strategies, and through expert content analysis, facilitate the electronic and print production, publication, and licensing of OUP’s books, journals and dictionaries content.
• Analyze OUP and licensed texts, and the requirements for their production, online functionality and presentation, and general electronic use, to:
  • Determine appropriate data structures to support those requirements (content and metadata),
  • Devise quality control rules and processes,
  • Design tools and workflows required by staff in Editorial and Production to produce, capture, and enhance the data.
• Write transformation scripts to convert source data into OUP standard data structures that meet the agreed data quality standards for publishing and licensing.” Read more
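The “transformation scripts” the posting mentions typically map one XML vocabulary onto another. A minimal sketch of that idea, using only Python’s standard-library `xml.etree`; the `<chapter>`/`<section>` element names are hypothetical, since OUP’s actual house schemas are not public:

```python
import xml.etree.ElementTree as ET

# Hypothetical source record; real publisher source formats differ.
SOURCE = """
<chapter>
  <heading>Semantic Technologies</heading>
  <para>Intro text.</para>
</chapter>
"""

def transform(source_xml: str) -> ET.Element:
    """Map a source <chapter> onto a (hypothetical) house-standard <section>."""
    src = ET.fromstring(source_xml)
    dst = ET.Element("section")
    title = ET.SubElement(dst, "title")
    title.text = src.findtext("heading")  # rename <heading> to <title>
    for para in src.findall("para"):      # carry paragraphs across as <p>
        p = ET.SubElement(dst, "p")
        p.text = para.text
    return dst

result = transform(SOURCE)
print(ET.tostring(result, encoding="unicode"))
# → <section><title>Semantic Technologies</title><p>Intro text.</p></section>
```

In production such mappings are more often written in XSLT, but the shape of the work is the same: validate the source, rename and restructure elements, and enforce the target schema’s quality rules.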
Alexandre Passant, founder of seevl, which we have covered before, has hacked together a cool proof of concept. He describes the project as using “Twitter As A Service,” and it leverages Twitter, YouTube, and the seevl API. As Passant describes, “The result is a twitter bot, running under our @seevl handle, which accepts a few (controlled) natural-language queries and replies with an appropriate track, embedded in a Tweet via a YouTube card.”
He continues, “As it’s all Twitter-based, not only you can send messages, but you can have a conversation with your virtual DJ.”
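The interesting part of such a bot is mapping a small set of “controlled” natural-language tweets onto API queries. A minimal sketch of that step, assuming hypothetical query patterns (Passant’s actual grammar and the seevl/Twitter plumbing are not published, so only the parsing is shown):

```python
import re

# Hypothetical controlled-query patterns; the real bot's grammar is unknown.
PATTERNS = [
    (re.compile(r"play (?:me )?(?:some|a track by) (.+)", re.I), "artist"),
    (re.compile(r"something like (.+)", re.I), "similar_to"),
]

def parse_query(tweet_text: str):
    """Map a controlled natural-language tweet to an (intent, argument) pair."""
    for pattern, intent in PATTERNS:
        m = pattern.match(tweet_text.strip())
        if m:
            return intent, m.group(1)
    return None  # not understood; the bot could reply with usage help

print(parse_query("play me some Daft Punk"))    # → ('artist', 'Daft Punk')
print(parse_query("something like Radiohead"))  # → ('similar_to', 'Radiohead')
```

From the parsed intent, the bot would query a music-metadata API (seevl, in Passant’s case) for a matching track and reply with a YouTube link, which Twitter renders as an embedded card.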
The Center for Urban Science and Progress at New York University is seeking a data curator. Requirements include: “3-5 years experience in a related field, such as applied science, metadata schema design and management, taxonomy management, or equivalent education and experience; relevant experience in research data management as a researcher, research data manager, research data repository manager, or in similar roles; demonstrated experience in consulting with faculty or researchers regarding technology or metadata creation; demonstrated understanding of the research and data lifecycle; demonstrated experience in curating and handling large data sets with particular understanding of requirements for longer term digital archiving; understanding of database systems, XML, RDF, scientific metadata standards, API development, and related technologies, as well as protocols such as OAI-PMH; experience with ontologies and metadata issues related to the discovery of academic or data resources; experience constructing and maintaining a controlled vocabulary.” Read more
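OAI-PMH, named in the posting, is an XML-over-HTTP protocol for harvesting repository metadata. A minimal sketch of reading one Dublin Core record out of a `ListRecords`-style response, using only the standard library (the sample record is invented for illustration; the namespace URIs are the real ones from the OAI-PMH and Dublin Core specs):

```python
import xml.etree.ElementTree as ET

# A trimmed OAI-PMH ListRecords response with one Dublin Core record (sample data).
RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Urban Sensor Dataset</dc:title>
          <dc:creator>CUSP</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def titles(xml_text: str):
    """Pull dc:title values out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(titles(RESPONSE))  # → ['Urban Sensor Dataset']
```

A real harvester would fetch pages of records over HTTP and follow the protocol’s `resumptionToken` pagination, but the parsing step looks like this.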
- In a sample of over 12 billion web pages, 21 percent (roughly 2.5 billion pages) use schema.org markup, to the tune of more than 15 billion entities and more than 65 billion triples;
- In that same sample, that averages out to six entities and 26 facts per page marked up with schema.org;
- Just about every major site in every major category, from news to e-commerce (with the exception of Amazon.com), uses it;
- Its ontology counts some 800 properties and 600 classes.
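The per-page numbers above are easier to picture with a concrete snippet. A minimal sketch that builds one schema.org entity as JSON-LD, the markup format commonly embedded in pages via a `<script type="application/ld+json">` tag; the headline, date, and author values are invented, while `NewsArticle`, `headline`, `datePublished`, `author`, and `Person` are real schema.org types and properties:

```python
import json

# One entity (a NewsArticle) carrying a handful of schema.org "facts" (triples).
entity = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Schema.org adoption keeps growing",
    "datePublished": "2014-09-10",
    "author": {"@type": "Person", "name": "Jane Doe"},
}

# Serialize for embedding in an HTML page.
jsonld = json.dumps(entity, indent=2)
print(jsonld)
```

Each property here is one "fact" about the entity, which is how a single marked-up page ends up contributing a half-dozen entities and a couple dozen triples to the totals above.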
A lot of that success has to do with its proponents’ focus, from the beginning, on making it very easy for webmasters and developers to adopt and leverage the collection of shared vocabularies for page markup. At this August’s 10th annual Semantic Technology & Business conference in San Jose, Google Fellow Ramanathan V. Guha, one of the founders of schema.org, shared the progress of the initiative to develop one vocabulary understood by all search engines, and how it got to where it is today.
Jamie Bisker reported, “Most consumers and professionals of all types have a basic feeling about technological innovation as something positive. It is true that we may bemoan the loss of a favorite aspect of the past, and tend to recall for the most part only favorable situations that strengthen such memories. But, in general, people feel upbeat about the convenience and capabilities that technology can provide. We evoke pleasant feelings from the past and that is our nature. It is also our nature, at a very deep biological level, to anticipate the future.
Jeff Hawkins, in his book On Intelligence, highlights research he both collected and directs about the physiological aspect of how neurons in the brain are connected. He has shown that prediction is basically wired in to a large portion of neural circuitry. Hawkins named this approach as a “memory-prediction framework.” He also makes the case for prediction as being one of the foundations of human intelligence.”
A recent press release states, “Transforming our cities into the Smart Cities of the future will encompass incorporating technologies and key digital developments all linked by machine-to-machine (M2M) solutions and real-time data analytics which sit under the umbrella term of the Internet of Things. Smart cities however must be underpinned by the appropriate ICT infrastructure based on fibre optic and high-speed wireless technologies, which is well underway in many developed cities around the world. This infrastructure allows for the development of smart communities; supporting connected homes; intelligent transport systems; e-health; e-government and e-education; smart grids and smart energy solutions – just to name a few of the exciting solutions smart cities will incorporate. Many of the technological advancements emerging around the world today can, and will be, applied to smart cities. Artificial Intelligence; Electric Vehicles; Autonomous Vehicles; Mobile applications; Drones; Wearable and Smart devices and so on are just some of the key developments to watch.” Read more