The Center for Urban Science and Progress at New York University is seeking a data curator. Requirements include: “3-5 years’ experience in a related field, such as applied science, metadata schema design and management, taxonomy management, or equivalent education and experience; relevant experience in research data management as a researcher, research data manager, research data repository manager, or in similar roles. Demonstrated experience in consulting with faculty or researchers regarding technology or metadata creation; demonstrated understanding of the research and data lifecycle. Demonstrated experience in curating and handling large data sets, with particular understanding of requirements for longer-term digital archiving. Understanding of database systems, XML, RDF, scientific metadata standards, API development, and related technologies, as well as protocols such as OAI-PMH. Experience with ontologies and metadata issues related to the discovery of academic or data resources. Experience constructing and maintaining a controlled vocabulary.” Read more
Some numbers on schema.org adoption illustrate just how widely it has spread:
- In a sample of over 12 billion web pages, 21 percent (2.5 billion pages) use schema.org to mark up HTML, to the tune of more than 15 billion entities and more than 65 billion triples;
- In that same sample, that works out to an average of six entities and 26 facts per page that uses schema.org;
- Just about every major site in every major category, from news to e-commerce (with the exception of Amazon.com), uses it;
- Its ontology counts some 800 properties and 600 classes.
A lot of schema.org’s success has to do with the focus its proponents have had since the beginning on making it very easy for webmasters and developers to adopt and leverage the collection of shared vocabularies for page markup. At this August’s 10th annual Semantic Technology & Business Conference in San Jose, Google Fellow Ramanathan V. Guha, one of the founders of schema.org, shared the progress of the initiative to develop one vocabulary understood by all search engines, and how it got to where it is today.
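Those per-page averages are easy to make concrete. A page’s schema.org markup describes typed entities and the facts (triples) asserted about them, and tallying both is straightforward. The sketch below is purely illustrative: the sample page and the counting rules are my assumptions, not the methodology behind the survey numbers above.

```python
# Hypothetical schema.org markup for one page, already extracted as JSON-LD.
page_markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Schema.org adoption keeps climbing",
    "datePublished": "2014-09-01",
    "author": {"@type": "Person", "name": "Jane Doe"},
}

def count_entities_and_facts(node):
    """Tally typed objects (entities) and property assertions (facts).

    Each dict carrying an @type counts as one entity; every non-@ key
    counts as one fact, i.e. one subject-property-value triple.
    """
    entities, facts = 0, 0
    if isinstance(node, dict):
        if "@type" in node:
            entities += 1
        for key, value in node.items():
            if not key.startswith("@"):
                facts += 1
            sub_entities, sub_facts = count_entities_and_facts(value)
            entities += sub_entities
            facts += sub_facts
    elif isinstance(node, list):
        for item in node:
            sub_entities, sub_facts = count_entities_and_facts(item)
            entities += sub_entities
            facts += sub_facts
    return entities, facts

print(count_entities_and_facts(page_markup))  # -> (2, 4)
```

Averaged over billions of marked-up pages, tallies like this one are where figures such as six entities and 26 facts per page come from.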
Jamie Bisker reported, “Most consumers and professionals of all types have a basic feeling about technological innovation as something positive. It is true that we may bemoan the loss of a favorite aspect of the past, and tend to recall for the most part only favorable situations that strengthen such memories. But, in general, people feel upbeat about the convenience and capabilities that technology can provide. We evoke pleasant feelings from the past and that is our nature. It is also our nature, at a very deep biological level, to anticipate the future.
“Jeff Hawkins, in his book On Intelligence, highlights research he both collected and directs about the physiological aspect of how neurons in the brain are connected. He has shown that prediction is basically wired into a large portion of neural circuitry. Hawkins named this approach the “memory-prediction framework.” He also makes the case for prediction as one of the foundations of human intelligence.”
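Hawkins describes a theory of the cortex rather than an algorithm, but the core intuition, memorize sequences and use that memory to predict what comes next, can be loosely illustrated in code. The toy first-order predictor below is my own illustration under that reading, not Hawkins’ model.

```python
from collections import Counter, defaultdict

class SequenceMemory:
    """Toy 'memory-prediction' sketch: remember observed transitions,
    then predict the likeliest next symbol. A loose illustration of
    the intuition only, not Hawkins' cortical model."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        # Memorize which symbol tends to follow which.
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Recall the most frequent successor seen so far, if any.
        options = self.transitions[current]
        return options.most_common(1)[0][0] if options else None

memory = SequenceMemory()
memory.observe("the cat sat on the mat".split())
print(memory.predict("the"))  # -> 'cat' (ties broken by first occurrence)
```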
A recent press release states: “Transforming our cities into the Smart Cities of the future will involve incorporating technologies and key digital developments, all linked by machine-to-machine (M2M) solutions and real-time data analytics, which sit under the umbrella term of the Internet of Things. Smart cities, however, must be underpinned by the appropriate ICT infrastructure based on fibre optic and high-speed wireless technologies, a build-out that is well underway in many developed cities around the world. This infrastructure allows for the development of smart communities, supporting connected homes; intelligent transport systems; e-health, e-government and e-education; and smart grids and smart energy solutions – just to name a few of the exciting solutions smart cities will incorporate. Many of the technological advancements emerging around the world today can, and will, be applied to smart cities. Artificial intelligence, electric vehicles, autonomous vehicles, mobile applications, drones, and wearable and smart devices are just some of the key developments to watch.” Read more
Asymmetrik seeks a lead software engineer. Responsibilities include: “The Lead Software Engineer will develop and maintain mission-critical information extraction, analysis, and management systems; implement analysis algorithms to evaluate, correlate, and display information; and provide direct and responsive support for urgent analytic needs. In addition, the Lead Software Engineer will participate in architecture and software development activities that may include:
Aaron Bradley recently posted a roundtable discussion about JSON-LD which includes: “JSON-LD is everywhere. Okay, perhaps not everywhere, but JSON-LD loomed large at the 2014 Semantic Web Technology and Business Conference in San Jose, where it was on many speakers’ lips, and could be seen in the code examples of many presentations. I’ve read much about the format – and have even provided a thumbnail definition of JSON-LD in these pages – but I wanted to take advantage of the conference to learn more about JSON-LD, and to better understand why this recently developed standard has been such a runaway hit with developers. In this quest I could not have been more fortunate than to sit down with Gregg Kellogg, one of the editors of the W3C Recommendation for JSON-LD, to learn more about the format, its promise as a developmental tool, and – particularly important to me as a search marketer – its role in the evolution of schema.org.”
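For readers who have not seen the format, the appeal is easy to show. A JSON-LD document is ordinary JSON plus an @context that maps plain keys to unambiguous IRIs. The tiny expansion function below is a hand-rolled sketch of that idea, not the W3C expansion algorithm or any real library’s API, and the document contents are invented for illustration.

```python
# A minimal JSON-LD document using schema.org terms (contents invented).
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "jobTitle": "http://schema.org/jobTitle",
    },
    "@type": "http://schema.org/Person",
    "name": "Gregg Kellogg",
    "jobTitle": "JSON-LD specification editor",
}

def expand(document):
    """Replace context-defined terms with full IRIs (a toy version of
    JSON-LD expansion; the real algorithm handles much more)."""
    context = document.get("@context", {})
    return {
        context.get(key, key): value
        for key, value in document.items()
        if key != "@context"
    }

print(expand(doc))
# {'@type': 'http://schema.org/Person',
#  'http://schema.org/name': 'Gregg Kellogg',
#  'http://schema.org/jobTitle': 'JSON-LD specification editor'}
```

Any developer can consume the document as plain JSON, while RDF tooling can expand it into linked data; that dual nature is much of why the format has caught on.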
Big Data has been getting its fair share of commentary over the last couple of months, with surveys from multiple sources weighing in on trends and expectations. The Semantic Web Blog provides some highlights here:
- From Accenture Analytics’ new Big Success With Big Data report: There remain some gaps in what constitutes Big Data for respondents to its survey: just 43 percent, for instance, classify unstructured data as part of the package. That option included open text, video and voice. Those are gaps that could be filled by leveraging technologies such as machine learning, speech recognition and natural language understanding, but they won’t be unless executives make these sources a focus of Big Data initiatives to start with.
- From Teradata’s new survey on Big Data Analytics in the UK, France and Germany: Close to 50 percent of respondents in the latter two countries (France and Germany) are using three or more data types (from sources ranging from social media, to video, to web blogs, to call center notes, to audio files and the Internet of Things) in their efforts, compared to just 20 percent in the UK. A much higher percentage of UK businesses (51 percent) are currently using just a single type of new data, such as video data, compared with France and Germany, where only 21 percent are limiting themselves to one type of new data, it notes. Forty-four percent of execs in Germany and 35 percent in France point to social media as the source of the new data. About one-third of respondents in each of those countries are investigating video, as well.
KForce (a recruiter) is looking for a director of taxonomy and information architecture. The job duties include:
- “Oversee ongoing enhancement of our information architecture and the schema, including taxonomy/ontology development.
- Manage, train, mentor, and recruit a growing team of analysts and QA specialists.
- Manage multi-stage QA process for verifying the integrity of data scraped and then coded from the internet.
- Monitor labor market trends, and develop rules to account for the emerging jobs, skills and credentials within our taxonomies.
- Leverage published data series and other third party information to inform and validate data curation activities.
- Develop and implement coding enhancement initiatives based on their utility to products, research efforts and clients.
- Implement automated data coding and quality control procedures.
- Work with a team of software developers to automate data coding and quality control procedures, to embed data innovations effectively within products, and to support the planning and implementation of a robust data warehouse infrastructure.”
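“Automated data coding” in listings like this one typically means mapping scraped free text onto a controlled vocabulary. As a hedged sketch of what a coding-plus-QA step in such a pipeline might look like (the taxonomy, the codes, and the helper names here are entirely made up):

```python
# Made-up controlled vocabulary: normalized job title -> taxonomy code.
TAXONOMY = {
    "data curator": "OCC-014",
    "software engineer": "OCC-201",
    "taxonomist": "OCC-015",
}

def code_title(raw_title):
    """Map a scraped job title to a taxonomy code, or None if unknown."""
    return TAXONOMY.get(raw_title.strip().lower())

def qa_report(scraped_titles):
    """Simple QA pass: measure coverage and flag titles the taxonomy
    cannot yet code, so analysts can review and extend the vocabulary."""
    uncoded = [title for title in scraped_titles if code_title(title) is None]
    coverage = 1 - len(uncoded) / len(scraped_titles)
    return {"coverage": round(coverage, 2), "needs_review": uncoded}

print(qa_report(["Data Curator", "Growth Hacker", "Software Engineer"]))
# {'coverage': 0.67, 'needs_review': ['Growth Hacker']}
```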
Kevin Fitchard of Gigaom recently posted that “Thanks to the popularity of its FireChat hyperlocal messaging app, Open Garden’s networking software has been downloaded onto more than 5 million mobile devices around the world. Open Garden believes it now has enough users out there to execute the next stage of its plan: it wants to use all of these smartphone nodes to create a new network for the internet of things. This concept probably requires some explaining, as it doesn’t fit into any of the other IoT networking schemes we’ve written about in the past. Unlike, say, your smart home, which uses a hub to aggregate a bunch of Zigbee or Wi-Fi connections, or a connected vehicle fleet, which taps into the cellular network, Open Garden’s IoT network would be created through millions of shared connections owned by you, me or anyone else with one of its apps on their smartphones, tablets or PCs.”
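In other words, it is a peer-to-peer mesh: each device relays messages for whatever neighbors happen to be in radio range, so data can hop phone to phone without a hub or a cell tower. Below is a loose sketch of that flood-and-relay idea; it is my own illustration of the general technique, not Open Garden’s actual protocol, whose details have not been published.

```python
# Toy mesh relay: a message floods hop by hop between nearby devices.
# A loose illustration of the idea, not Open Garden's actual protocol.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # devices currently in radio range
        self.seen = set()     # message ids already handled

    def receive(self, msg_id, payload, ttl):
        if msg_id in self.seen or ttl <= 0:
            return  # drop duplicates and expired messages
        self.seen.add(msg_id)
        print(f"{self.name} received {payload!r}")
        for peer in self.neighbors:   # relay to everyone in range
            peer.receive(msg_id, payload, ttl - 1)

# A chain of phones: a <-> b <-> c, with no hub and no cell network.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

a.receive("msg-1", "sensor reading: 21.5C", ttl=4)
# a received ... / b received ... / c received ...
```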
A recent announcement on EurekAlert! states: “Researchers from North Carolina State University have developed artificial intelligence (AI) software that is significantly better than any previous technology at predicting what goal a player is trying to achieve in a video game. The advance holds promise for helping game developers design new ways of improving the gameplay experience for players. “We developed this software for use in educational gaming, but it has applications for all video game developers,” says Dr. James Lester, a professor of computer science at NC State and senior author of a paper on the work. “This is a key step in developing player-adaptive games that can respond to player actions to improve the gaming experience, either for entertainment or – in our case – for education.” The researchers used “deep learning” to develop the AI software. Deep learning describes a family of machine learning techniques that can extrapolate patterns from large collections of data and make predictions. Deep learning has been actively investigated in various research domains such as computer vision and natural language processing in both academia and industry.”
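The NC State models are deep networks trained on player telemetry, which is beyond the scope of a blog snippet, but the shape of the problem, mapping a sequence of player actions to a likely goal, can be shown with a much simpler stand-in. The training logs, actions, and goal labels below are invented for illustration, and the vote-counting model is a deliberate simplification of the deep learning the researchers actually used.

```python
from collections import Counter, defaultdict

# Invented training data: (sequence of player actions, eventual goal).
LOGS = [
    (["enter_lab", "pick_up_beaker", "read_note"], "solve_mystery"),
    (["enter_lab", "talk_to_npc", "read_note"], "solve_mystery"),
    (["open_map", "walk_north", "walk_north"], "explore_island"),
]

# Learn how often each action co-occurs with each goal.
votes = defaultdict(Counter)
for actions, goal in LOGS:
    for action in actions:
        votes[action][goal] += 1

def predict_goal(actions_so_far):
    """Let each observed action vote for the goals it co-occurred with.
    (A toy stand-in for the deep models in the NC State work.)"""
    tally = Counter()
    for action in actions_so_far:
        tally.update(votes[action])
    return tally.most_common(1)[0][0] if tally else None

print(predict_goal(["enter_lab", "read_note"]))  # -> 'solve_mystery'
```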