Enjoy the complete album of photos from CES at Anderson
UCLA Anderson’s annual CES® event, held in partnership with the Consumer Technology Association (CTA)™ following the world-renowned CES conference in Las Vegas, drew an audience of UCLA students, faculty and staff, as well as industry professionals and business leaders.
The CES at Anderson differentiator is the university context, where rigorous scientific research meets entrepreneurship across disciplines. Professor Mayank Mehta, who teaches in UCLA’s departments of physics and astronomy and neurobiology, and Dr. Ramin Ramezani, assistant adjunct professor of computer science, both lent expertise to panels that explored machine learning, artificial intelligence and virtual reality. Several Anderson alumni chimed in with insights from their positions at companies that include Epson, Sengled and Deepgram. Dyan Decker (’02), PwC’s U.S. forensic technology leader, sat down with Easton Technology Management Center faculty director John Blevins for a conversation about CES in Las Vegas, where the emphasis was on rapidly developing smart home technology and the race to refine virtual assistants.
CES at Anderson featured keynote addresses by Neil Sahota, worldwide business development leader and master inventor in the IBM Watson group, and Ravi Sawhney, founder and CEO of RKS Design. The popular technology exhibitors’ fair proved as lively as ever — despite uncharacteristically wet weather. UCLA made a strong showing with innovations from the Biomechatronics Lab and Supra Studio, as did startups like Envi, a waterless detail-on-demand service incubated at the Los Angeles Cleantech Incubator that employs deaf workers to wash cars at locations drivers specify through an app. The UCLA Modeling and Education Demonstrations Laboratory brought its interactive augmented reality sandbox, a 3-D topographic map that responds to users’ sculpting of terrain.
MBA students Marcus Barton (’18), Alec Bialosky (’18), Shu He (’18), Joseph Lee (’17) and Sridutt Nayak (’18) and undergraduate Ann Nguyen (B.A. ’18) compiled their impressions of the presentations, reporting that the “universe of possibilities” seems infinite. Yet because even non-tech companies now play a key role in advancing technology, keeping pace with the evolving needs and expectations of 21st-century businesses remains a challenge as new technologies blossom.
Machine Learning and Artificial Intelligence
- 2016 was a breakthrough year for AI, machine learning and deep learning, as we finally have the computing power and access to data sets large enough to apply newly developed algorithms. Beyond self-driving cars and image recognition, it’s still unclear which problems AI/ML will solve next. The challenging part is having the creativity and vision to understand the universe of possibilities for patterns between data sets that humans cannot see but that machine learning technologies can detect.
- CES at Anderson panelists said they expect flexibility to be a key component of future AI/ML. Today’s ML technology mostly utilizes single algorithms, and we can expect more powerful insights from future technologies capable of deploying multiple algorithms in unison to determine which solutions are most likely to solve a given problem. Major barriers to adoption include access to the large-scale computing infrastructure these methods require, the sharing of data sets by major players (like Google, Facebook and Amazon) with third parties, and a lack of understanding of exactly how these technologies can be implemented at a granular level to solve businesses’ pain points.
- Deep learning allows software to write software, illustrating the machine’s ability to learn from data as well as experience, similar to how humans learn. Deep learning has essentially transformed the mature field of algorithmic research. Whereas traditional algorithms are hierarchical and built from decision trees, deep learning is flexible and complex: it explores and attempts to match all possibilities from a sea of data. Panelist Brian Gamido (’08), head of business at Deepgram, explained: “Deep learning started off with something like recognizing a pet, then faces, then identifying familiar faces, then analyzing nuances between consumers and companies to anticipate needs and wants.”
- Although there are limitless opportunities for DL innovation, there are also current barriers to adoption. In the medical domain, machine learning is grappling with the problem of too much data: collection in this field has grown so large that it’s difficult to identify which subsets would be most effective in solving a specific set of problems. As IBM’s Sahota pointed out, “Medical knowledge doubles every five years.” And machines, though arguably faster at computing solutions than humans, are for the moment not making decisions; they’re just providing information.
- PwC’s Decker said MBAs should look for opportunities in advancing voice recognition software, particularly for children and visually impaired people. Drones, robots and other hardware innovations notwithstanding, she said, software and cloud computing power are really what’s accelerating tech. Although CES is a technology-based convention, there’s a trend of more companies across industries (media, entertainment, travel) taking an interest. Companies are thinking about how to maximize revenues and find new ways to ease the customer experience through technology. “So many industries and businesses are being created from scratch that no one thought of before,” Decker said.
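The contrast panelists drew between hand-built, hierarchical rules and models that learn from data, along with the idea of trying multiple algorithms and keeping whichever performs best, can be sketched in a few lines of Python. This is a toy illustration only; the functions and data below are invented for this sketch, not drawn from the panels.

```python
import math

def hand_built_tree(x):
    """Rigid, human-authored rule: class 1 when x exceeds 0.5."""
    return 1 if x > 0.5 else 0

def train_logistic(data, epochs=1000, lr=0.5):
    """Learn a one-feature logistic model; here the threshold
    emerges from the examples rather than being hand-coded."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                     # gradient step on weight
            b -= lr * (p - y)                         # gradient step on bias
    return lambda x: 1 if w * x + b > 0 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
valid = [(0.25, 0), (0.75, 1)]

learned = train_logistic(train)
# "Deploying multiple algorithms in unison": score each candidate on
# held-out data and keep the best performer.
best = max([hand_built_tree, learned], key=lambda m: accuracy(m, valid))
print(accuracy(best, valid))  # both candidates separate this toy data
```

Real pipelines swap in libraries such as scikit-learn or TensorFlow for the hand-rolled training loop, but the shape of the idea — fixed rules versus parameters fit to data, with validation deciding the winner — is the same.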
Virtual Reality and the Connected World
- Current wearable VR technology is associated with fatigue and frustration. The inability to get visual feedback from hands and feet can cause feelings of disembodiment and interfere with brain function. UCLA’s Professor Mehta explained how the perception of physical space is formed in human brains and said nearly 60 percent of neurons shut down when a person experiences VR. The other 40 percent, soberingly enough, are located in the same nerve centers that are activated in Alzheimer’s and epilepsy patients. A much-needed “handshake” between the tech and science communities to orchestrate AR and VR development could curtail long-term negative effects. The industry, Mehta said, should push the envelope within limitations.
- Sami Ramly, VR product and program lead at Wevr, blamed the delay in widespread adoption of VR on the lack of consumer content in the space. He said immature technology keeps people from creating their own content, and user-generated content is one of the primary reasons a platform like YouTube succeeded so wildly. Harnish Jani, lead strategic designer and venture architect at BCG Digital Ventures, stressed the need for a robust network to handle data loads, better location tracking and triangulation devices, haptic sensors and high-fidelity cameras to provide high-resolution imagery.
- Epson product manager Michael Levya (’14) said Epson is building a comprehensive ecosystem for the VR domain, from developers to users, instead of adopting the one-off hardware approach taken by many. The company works with its enterprise customers to reduce operational costs, adding agility to their operations.