Machine Learning Advances Human-Computer Interaction
March 14, 2017 | University of Rochester | Estimated reading time: 9 minutes

Inside the University of Rochester’s Robotics and Artificial Intelligence Laboratory, a robotic torso looms over a row of plastic gears and blocks, awaiting instructions. Next to it, Jacob Arkin ’13, a doctoral candidate in electrical and computer engineering, gives the robot a command: “Pick up the middle gear in the row of five gears on the right,” he says to the Baxter Research Robot. The robot, sporting a University of Rochester winter cap, pauses before turning and extending its right limb in the direction of the object.
Baxter, along with other robots in the lab, is learning how to perform human tasks and to interact with people as part of a human-robot team. “The central theme through all of these is that we use language and machine learning as a basis for robot decision making,” says Thomas Howard ’04, an assistant professor of electrical and computer engineering and director of the University’s robotics lab.
Machine learning, a subfield of artificial intelligence, began to take off in the 1950s, after the British mathematician Alan Turing published a revolutionary paper on the possibility of devising machines that think and learn. His famous Turing Test deems a machine intelligent if a person conversing with it cannot reliably distinguish it from a human being.
Today, machine learning provides computers with the ability to learn from labeled examples and observations of data—and to adapt when exposed to new data—instead of having to be explicitly programmed for each task. Researchers are developing computer programs to build models that detect patterns, draw connections, and make predictions from data to construct informed decisions about what to do next.
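As a toy illustration of learning from labeled examples, the Python sketch below trains a naive Bayes spam filter on four invented messages; no rule for spotting spam is ever written by hand, yet the model generalizes to a message it has never seen. The messages and labels are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled data set: the program is given examples, not rules.
messages = ["win a free prize now", "meeting moved to 3pm",
            "free cash click here", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)  # text -> word-count features

model = MultinomialNB().fit(X, labels)

# The fitted model now makes a prediction about unseen data.
print(model.predict(vectorizer.transform(["claim your free prize"])))
# -> ['spam']
```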
The results of machine learning are apparent everywhere: Facebook’s personalization of each member’s News Feed, speech recognition systems like Siri, e-mail spam filtering, financial market tools, recommendation engines like those of Amazon and Netflix, and language translation services.
Howard and other University professors are developing new ways to use machine learning to provide insights into the human mind and to improve the interaction between computers, robots, and people.
With Baxter, Howard, Arkin, and collaborators at MIT developed mathematical models for the robot to understand complex natural language instructions. When Arkin directs Baxter to “pick up the middle gear in the row of five gears on the right,” their models enable the robot to quickly learn the connections between audio, environmental, and video data, and adjust algorithm characteristics to complete the task.
What makes this particularly challenging is that robots need to be able to process instructions in a wide variety of environments and to do so at a speed that makes for natural human-robot dialog. The group’s research on this problem led to a Best Paper Award at the Robotics: Science and Systems 2016 conference.
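The group’s actual models are considerably more sophisticated, but a toy Python sketch can convey the core grounding problem: mapping the words of a command onto a specific object in the robot’s world. Everything below, the Object class, the ground function, and the toy tabletop, is hypothetical and greatly simplified.

```python
from dataclasses import dataclass

@dataclass
class Object:
    kind: str   # e.g., "gear" or "block"
    x: float    # position on the table; larger x = farther right

# A toy tabletop: two blocks on the left, five gears on the right.
world = [Object("block", 0.1), Object("block", 0.2)] + \
        [Object("gear", 0.5 + 0.1 * i) for i in range(5)]

def ground(world, kind, group_size, selector):
    """Resolve a phrase like 'the middle gear in the row of five
    gears on the right' to one concrete object in the world."""
    candidates = sorted((o for o in world if o.kind == kind),
                        key=lambda o: o.x)
    group = candidates[-group_size:]   # the rightmost group_size objects
    if selector == "middle":
        return group[len(group) // 2]
    raise ValueError(f"unknown selector: {selector}")

target = ground(world, kind="gear", group_size=5, selector="middle")
print(f"pick up the {target.kind} at x = {target.x:.2f}")  # x = 0.70
```

A real system must infer the kind, the group, and the selector from raw speech, and do so in arbitrary environments, which is precisely what makes the problem hard and the speed requirement demanding.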
By improving the accuracy, speed, scalability, and adaptability of such models, Howard envisions a future in which humans and robots perform tasks in manufacturing, agriculture, transportation, exploration, and medicine cooperatively, combining the accuracy and repeatability of robotics with the creativity and cognitive skills of people.
“It is quite difficult to program robots to perform tasks reliably in unstructured and dynamic environments,” Howard says. “It is essential for robots to accumulate experience and learn better ways to perform tasks in the same way that we do, and algorithms for machine learning are critical for this.”
Jake Arkin, PhD student in electrical and computer engineering, demonstrates a natural language model for training a robot to complete a particular task.
Using Machine Learning to Make Predictions
A photograph of a stop sign contains visual features such as color, shape, and letters that help human beings identify it as a stop sign. To train a computer to identify a person or an object, the computer needs to see those features as distinctive patterns of data.
“For human beings to recognize another person, we take in their eyes, nose, mouth,” says Jiebo Luo, an associate professor of computer science. “Machines do not necessarily ‘think’ like humans.”
While Howard creates algorithms that allow robots to understand spoken language, Luo employs the power of machine learning to teach computers to identify features and detect configurations in social media images and data.
“When you take a picture with a digital camera or with your phone, you’ll probably see little squares around everyone’s faces,” Luo says. “This is the kind of technology we use to train computers to identify images.”
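Face detection of the kind Luo describes is available in off-the-shelf libraries. The sketch below uses OpenCV’s bundled Haar-cascade detector, itself a classic machine-learned model trained on many labeled face and non-face image patches; the file names are placeholders.

```python
import cv2

# Load OpenCV's bundled Haar-cascade face detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is the (x, y, width, height) of one face "square."
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
```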
Using these advanced computer vision tools, Luo and his team train artificial neural networks—a core machine learning technique—to enable computers to sort online images and to determine, for instance, emotions in images, underage drinking patterns, and trends in presidential candidates’ Twitter followers.
Artificial neural networks loosely mimic the neural networks of the human brain: they identify images or parse complex abstractions by dividing them into pieces, finding patterns, and making connections. However, a machine does not perceive an image the way a human being does; the pieces are converted into patterns of numbers, and the machine learns to identify those patterns through repeated exposure to data.
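As a generic illustration (not the specific networks Luo’s team builds), the scikit-learn sketch below trains a small artificial neural network on 8-by-8-pixel digit images, which the network sees only as vectors of 64 numbers, exactly the kind of data patterns described above.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each 8x8 image is flattened into 64 numbers -- the "patterns"
# the network actually sees, rather than a picture.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Repeated passes over the labeled examples adjust the network's weights.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```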
“Essentially everything we do is machine learning,” Luo says. “You need to teach the machine many times that this is a picture of a man, this is a woman, and it eventually leads it to the correct conclusion.”
Cognitive Models and Machine Learning
If a person sees an object she’s never seen before, she will use her senses to determine various things about the object. She might look at the object, pick it up, and determine it resembles a hammer. She might then use it to pound things.
“So much of human cognition is based on categorization and similarity to things we have already experienced through our senses,” says Robby Jacobs, a professor of brain and cognitive sciences.
While artificial intelligence researchers focus on building systems such as Baxter that interact with their surroundings and solve tasks with human-like intelligence, cognitive scientists use data science and machine learning to study how the human brain takes in data.
“We each have a lifetime of sensory experiences, which is an amazing amount of data,” Jacobs says. “But people are also very good at learning from one or two data items in a way that machines cannot.”
Imagine a child who is just learning the words for various objects. He may point at a table and mistakenly call it a chair, prompting his parents to respond, “No, that is not a chair,” and point to a chair to identify it as such. As the toddler continues to point to objects, he becomes more aware of the features that place them in distinct categories. Drawing on a series of inferences, he learns to identify a wide variety of objects meant for sitting, each distinct from the others in various ways.
This learning process is much more difficult for a computer: machine learning typically requires exposing a model to many labeled examples before its performance improves.
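That gap is easy to demonstrate. In the illustrative sketch below (not Jacobs’s research), a simple nearest-neighbor classifier’s accuracy on handwritten digits climbs steadily as it is shown more labeled examples, while a child might generalize from one or two.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

# Accuracy improves with exposure: more labeled examples, better results.
for n in (10, 50, 200, len(X_train)):
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train[:n], y_train[:n])
    print(f"{n:4d} examples -> test accuracy {knn.score(X_test, y_test):.2f}")
```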
One of Jacobs’ projects involves printing novel plastic objects using a 3-D printer and asking people to describe the items visually and haptically (by touch). He uses this data to create computer models that mimic the ways humans categorize and conceptualize the world. Through these computer simulations and models of cognition, Jacobs studies learning, memory, and decision making, specifically how we take in information through our senses to identify or categorize objects.
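One classic family of such models is prototype-based categorization: a novel object is assigned to the category whose stored prototype its sensed features most resemble. The Python sketch below is a hypothetical toy with invented feature values, not Jacobs’s actual models.

```python
import numpy as np

# Hypothetical feature vectors distilled from visual and haptic
# descriptions (e.g., [roundness, hardness, elongation]); all values
# here are invented for illustration.
prototypes = {
    "hammer": np.array([0.2, 0.9, 0.6]),
    "ball":   np.array([0.9, 0.4, 0.3]),
}

def categorize(features):
    """Prototype model: pick the category whose prototype is most
    similar (cosine similarity between feature vectors)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda name: cos(features, prototypes[name]))

novel_object = np.array([0.3, 0.8, 0.5])  # seen and felt for the first time
print(categorize(novel_object))           # -> "hammer"
```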
“This research will allow us to better develop therapies for the blind or deaf or others whose senses are impaired,” Jacobs says.