A New System to Link Human Knowledge with Machine Data

There have been dozens of experimental design approaches to the problem of connecting human nerves with mechanical or electronic extensions. Beyond “smart prosthetics,” there have been more adventurous attempts, including an effort to connect a monkey’s brain to a computer using a battery-powered, self-contained interface, inserted in the skull, that lets monkeys type on computers.

Rarely do those experiments result in something original and effective enough to justify a patent, which would allow the inventors to sell the technology to Skynet, the Borg or (possibly) more benign customers.

Even rarer are patents granted on software that works like a combination of knowledge management, business intelligence and personal reminders: closer to mind-mapping than to the insertion of control wires into the central nervous system.

Two professors from Penn State announced Aug. 23 that they’d been awarded a patent for a collaborative intelligent-agent framework designed to make connections between brain and PC smart enough to let humans offload memories onto flash memory, or whatever.

The patent, “Agent-based Collaborative Recognition-Primed Decision Making,” describes a framework of software agents called Collaborative Agents for Simulating Teamwork (CAST). The framework is designed to support the development of knowledge-sharing networks, made more efficient by an analysis engine based on the recognition-primed decision (RPD) model, which identifies, links and combines discussions or content on complementary topics.
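The recognition-primed decision idea can be sketched in a few lines: rather than enumerating and weighing all options, an agent matches the cues of the current situation against a library of past cases and reuses the course of action from the best match. The sketch below is illustrative only (the class names, cue sets and matching rule are assumptions, not the patented method).

```python
from dataclasses import dataclass, field


@dataclass
class Experience:
    """A stored case: the cues that characterized a past situation
    and the course of action that worked for it."""
    cues: frozenset
    action: str


@dataclass
class RPDAgent:
    """Toy recognition-primed decision agent: it recognizes the
    situation most similar to past experience (by cue overlap)
    and reuses that course of action."""
    experiences: list = field(default_factory=list)

    def learn(self, cues, action):
        self.experiences.append(Experience(frozenset(cues), action))

    def decide(self, observed_cues):
        observed = set(observed_cues)
        # Recognition = overlap between observed cues and stored cues.
        best = max(self.experiences,
                   key=lambda e: len(e.cues & observed),
                   default=None)
        return best.action if best else "gather more information"


agent = RPDAgent()
agent.learn({"smoke", "alarm", "high-rise"}, "dispatch ladder company")
agent.learn({"flood", "road closure"}, "reroute and dispatch boats")

print(agent.decide({"smoke", "high-rise", "night"}))
# → dispatch ladder company (two cues match the first experience)
```

A real RPD implementation would also weigh cue importance and mentally simulate the retrieved action before committing, but the core move is the same: match first, deliberate only if recognition fails.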

The datacenter/brain connection is metaphorical, not physical, however. “R-CAST automates the proactive exchanges of information relevant to the situation, which is continually updated as new information arrives,” according to John Yen, a professor of computer science and engineering at Penn State. Yen developed the system with associate professor Xiaocong Fan and Shuang Sun.

Combined, the agents and pattern recognition don’t so much simulate human thinking as collect, identify and describe how to use existing information more efficiently, especially in high-pressure, time-sensitive environments such as a first-responder dispatch or 911 office.

The Office of Naval Research is looking into the approach as a potential model for distributed decision-making teams charged with quick response. High-performing teams respond in similar ways and apparently follow similar thought processes, because they all face similar problems and have similar options for response.

The RPD decision-making framework “acts as an intelligent team partner that is able to share information without overloading people and enhances the quality of information by sharing relevant facts,” Yen said.

In practical terms, that means being able to more quickly identify a terrorist leader, for example, by recognizing patterns of behavior or similar interactions or social connections among members of terrorist cells.
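The social-connection side of that pattern recognition can be illustrated with simple link analysis: build a contact graph from observed interactions and flag anyone connected to several known cell members. This is a minimal sketch of the general technique, not the patented system; the function name, threshold and data are all invented for illustration.

```python
from collections import defaultdict


def flag_likely_associates(interactions, known_members, threshold=2):
    """Toy link analysis: build an undirected contact graph from
    observed interactions, then flag anyone (not already a known
    member) in contact with at least `threshold` known members."""
    contacts = defaultdict(set)
    for a, b in interactions:
        contacts[a].add(b)
        contacts[b].add(a)
    return {person for person, peers in contacts.items()
            if person not in known_members
            and len(peers & known_members) >= threshold}


# "A" talks to three known members X, Y, Z; "B" and "C" to one each.
interactions = [("A", "X"), ("B", "X"), ("A", "Y"), ("C", "Y"), ("A", "Z")]
known = {"X", "Y", "Z"}
print(flag_likely_associates(interactions, known))  # → {'A'}
```

Real analyst tooling layers on weighting, time decay and behavioral features, but counting shared connections is the starting point for the kind of cell-structure recognition the article describes.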

“This agent architecture can not only enhance the capabilities of anti-terrorist analysts in identifying terrorist threats, but also pave the way for the next generation of digital assistants that are ‘personalized’ not only for individuals, but also for teams,” according to the paper describing the technique, which was published in the Journal of Cognitive Engineering and Decision Making.

The system is more knowledge management or BI than a direct human-PC interface, but it “finds the sweet spot that combines machine intelligence working in tandem with human intelligence,” Yen said.


Image: Leedsdn/Shutterstock.com
