In this position, you will be responsible for analyzing and optimizing key deep learning (DL) and machine learning (ML) algorithms and applications on current and next-generation Intel hardware. In addition, responsibilities include working as part of a team collaborating on conceiving, researching, and prototyping new machine learning techniques and use cases, with the goal of driving Intel's growth in this space. This includes both ensuring that leading DL/ML frameworks (e.g., TensorFlow) take full advantage of features in our products and shaping next-generation products by driving technologies that ensure performance leadership on emerging DL/ML applications and use cases. Ideal candidates have a good understanding of state-of-the-art techniques in machine learning and deep learning, performance optimization, and benchmarking, along with a strong understanding of computer architecture. Candidates must also possess strong verbal and written communication skills and a demonstrated ability to work in a demanding, team-oriented environment.
You are expected to maintain substantial knowledge of state-of-the-art principles and theories in machine learning, performance optimization, and computer architecture in general. You may also participate in the development of intellectual property.
Inside this Business Group
Intel Nervana, leveraging Intel's world-leading position in silicon innovation and proven history of creating the compute standards that power our world, is transforming Artificial Intelligence (AI). Harnessing silicon designed specifically for AI, end-to-end solutions spanning from the data center to the edge, and tools that enable customers to quickly deploy and scale up, Intel Nervana is inside AI and leading the next evolution of compute.
US, Arizona, Phoenix; US, Oregon, Hillsboro; US, California, San Diego;