The Internet of Things (IoT) and Intelligent Autonomous Machines (IAM) Track will focus on the power of the NVIDIA® Jetson™ TX1 in connected applications and as an energy-efficient solution for intensive processing in an embedded form factor. See the latest advances in drones, robotics, and intelligent video analytics, and learn how GPUs are powering autonomous navigation, facial recognition, behavioral analysis, and related applications.
Sensor Processing at Sea: Remoteness Demands High Performance
Brain-in-a-Box: A Unified Perception and Navigation Framework for Mobile Robots, Drones and Cars
IFM Technologies: Intelligent Flying Machines for Indoor Applications
IFM TECHNOLOGIES, Founder
We'll present recent advancements in leveraging the GPU on board IFM Technologies' "Intelligent Flying Machines." IFM provides industrial indoor flying platforms for data-driven decisions in the manufacturing and logistics industries, along with a complete framework to collect, visualize, and leverage three-dimensional data analysis in indoor environments. Using the onboard GPU, IFM Technologies takes innovative production and logistics technology to a -- quite literally -- new dimension.
ABOUT THE SPEAKER: Marc Gyongyosi is a junior in computer science at Northwestern University in the McCormick School of Engineering. For the past two years, he has been working closely with BMW's robotics research department to develop novel robotic systems that assist workers at BMW factories. At BMW, Marc's primary research focus is the implementation and development of cooperative lightweight robots. At Northwestern's The Garage, Marc is involved in two startups: at MDAR Technologies, he works on a novel 3D vision system for self-driving cars and other autonomous vehicles. As the founder of IFM Technologies, he develops novel "Intelligent Flying Machines," i.e., Drones for Decisions. IFM Technologies aims to increase productivity and improve efficiency in everyday manufacturing and logistics processes.
Brain-in-a-Box: A Unified Perception and Navigation Framework for Mobile Robots, Drones and Cars
NEURALA, INC., CEO
Mobile robots, drones, and self-driving cars need advanced and coordinated capabilities in perception and mobility to co-exist with humans in complex environments. To date, the most effective "machines" built for these tasks come from biology. Max Versace, CEO of Neurala and director of the Boston University Neuromorphics Lab, will explain how mobile robots, drones, and cars can use GPUs coupled with relatively inexpensive sensors, available today in the sensor pack of a common smartphone, to intelligently sense and navigate their environment. The talk will illustrate a working "mini-brain" that can drive a ground robot to learn, map, and understand the layout of its environment and the objects in it, all while avoiding collisions.
ABOUT THE SPEAKER: Massimiliano Versace is the CEO of Neurala Inc. and founding director of the Neuromorphics Lab at Boston University. He is a pioneer in researching and bringing to market large-scale, deep learning neural models that allow robots to interact and learn in real time in complex environments. Max has authored approximately 40 journal articles, book chapters, and conference papers, holds several patents, and has been an invited speaker at dozens of academic and business meetings, research and national labs, and companies, including NASA, Los Alamos National Labs, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Ericsson, BAE Systems, Mitsubishi, ABB, and Accenture, among others. He is a Fulbright scholar and holds two Ph.D.s: experimental psychology, University of Trieste, Italy, and cognitive and neural systems, Boston University, USA. He obtained his B.S. from the University of Trieste, Italy.
Sensor Processing at Sea: Remoteness Demands High Performance
LIQUID ROBOTICS, Chief Software Architect
We built an autonomous robot that roams the oceans carrying whatever sensors the customer provides. The ocean is an extreme environment: between salt water, hurricanes, communication challenges, and the need to be at sea for months at a time, the engineering has been an exciting challenge. Key to meeting that challenge has been having significant onboard computing resources. Multiple communication channels must be arbitrated among. Something is always failing and must be coped with. Autonomy can get complex, especially when collision avoidance kicks in. The machine is on its own, many, many miles from the nearest human. At the same time, it is part of an end-to-end application that includes significant cloud processing and fleet operations.
ABOUT THE SPEAKER: James Gosling received a B.S. in computer science from the University of Calgary, Canada in 1977. He received a Ph.D. in computer science from Carnegie Mellon University in 1983. The title of his thesis was "The Algebraic Manipulation of Constraints". He spent many years as a VP & Fellow at Sun Microsystems. He has built satellite data acquisition systems, a multiprocessor version of Unix, several compilers, mail systems, and window managers. He has also built a WYSIWYG text editor, a constraint-based drawing editor, and a text editor called "Emacs" for Unix systems. At Sun, his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has been a contributor to the Real-Time Specification for Java, and a researcher at Sun Labs, where his primary interest was software development tools. He then was the Chief Technology Officer of Sun's Developer Products Group and the CTO of Sun's Client Software Group. He briefly worked for Oracle after the acquisition of Sun. After a year off, he spent some time at Google and is now the chief software architect at Liquid Robotics, where he spends his time writing software for the Wave Glider, an autonomous ocean-going robot.
Enabling Autonomous Drones with Real-Time Computer Vision Applications
Implementing Deep Learning for Video Analytics on Tegra X1
Development of Affordable Persuasive Autonomous Vehicles for Dense Urban Areas
Implementing Deep Learning for Video Analytics on Tegra X1
HERTA SECURITY, Director of Research
The performance of the Tegra X1 architecture opens the door to real-time evaluation and deployment of deep neural networks for video analytics applications. This session presents a highly optimized, low-latency pipeline that accelerates deep-neural-network-based demographics estimation in videos. The proposed techniques leverage the on-die hardware video decoding engine and Maxwell GPU cores to conduct advanced video analytics such as gender or age estimation. Our results show that Tegra X1 is the right platform for developing embedded video analytics solutions.
ABOUT THE SPEAKER: Dr. Javier Rodriguez Saeta is CEO of Herta Security, which he founded in 2009. He received his M.S. and Ph.D. degrees in electrical engineering from the Universitat Politecnica de Catalunya in 2000 and 2005, respectively. He received a B.A. in business administration from the Open University of Catalonia, and an MBA from ESADE Business School. In 2000, he worked for Robert Bosch GmbH in Hildesheim, Germany. In 2001, he joined Biometric Technologies in Barcelona, Spain, where he was the R&D manager. He has published more than 20 papers in different magazines and workshops, and holds three patents. His main research interests include all issues related to innovation, security, and biometric systems and applications.
Enabling Autonomous Drones with Real-Time Computer Vision Applications
Real-time computer vision applications are the missing link to unlock a world of new applications for drones. From autonomous inspection missions for power lines or railroads, to advanced filming features such as follow-me and position correction, the need for embedded computer vision to create autonomy for drones is rapidly growing. Percepto has developed an embedded platform with a drone-oriented SDK that enables most drones today to use these types of capabilities. We'll go over the needs, challenges, and solutions that enable new types of disruptive autonomous drone solutions.
ABOUT THE SPEAKER: Dor Abuhasira is the CEO and co-founder of Percepto, a leading provider of real-time computer vision technology and applications for drones (sUAVs). Dor led embedded computing hardware design at Netoptics and before that was a technology leader at ECI Telecom, where he was responsible for the company's next-generation GPON platforms developed for Tier 1 customers such as British Telecom and Deutsche Telekom. Dor has a B.Sc. with honors in electrical engineering from Ben-Gurion University in Israel.
Development of Affordable Persuasive Autonomous Vehicles for Dense Urban Areas
MIT, Assistant Professor of Aeronautics and Astronautics
Autonomous vehicles hold the potential to disrupt urban mobility and logistics. In particular, autonomy-enabled ride-sharing systems can reduce transportation delays and emissions while enhancing safety. This talk will outline a number of projects focused on developing autonomy-capable vehicles, including the development of an autonomous persuasive tricycle for dense urban centers (in partnership with Taiwan and Andorra), the development of fully autonomous electric cars for ride-sharing systems (in partnership with Singapore), and the development of cars with advanced safety features (as a part of MIT's collaboration with Toyota). The talk will focus on the first project, in which the MIT Media Lab, the Laboratory for Information and Decision Systems, and the Center for Logistics and Transportation partnered to evaluate current technology capabilities on a low-cost, multifunctional, lightweight electric autonomous vehicle. This vehicle is intended to give the user a glimpse into the future of shared-use autonomous vehicles in urban environments that serve both people and goods. We will also outline the use of GPU-based computing technologies, which are among the key enablers for this system.
ABOUT THE SPEAKER: Sertac Karaman is the Charles Stark Draper Assistant Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (since Fall 2012). He has obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science also from MIT in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an Army Research Office Young Investigator Award in 2015, National Science Foundation Faculty Career Development (CAREER) Award in 2014, AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.