Senseye aims to revolutionize the way humans communicate with technology. We are developing a unique sensory interface technology to create a direct link from a computer to a human's mind via the complex dynamic properties of the human eye. We're looking for a software engineer who loves solving meaningful problems to join our team in downtown Austin. We are a diverse group of people with a broad range of backgrounds, experiences, and perspectives who have a lot of work to do and would love your help in achieving our visionary goals.
The machine learning team at Senseye exists to develop systems that can identify, externalize, predict, and anticipate human cognitive and affective states and processes. Our proximal goal is to use this previously unreachable stream of information to inform decision making, enhance safety, and aid in learning, both in the classroom and in the acquisition of skills such as flying a plane or performing surgery (or, rarely, both at the same time).
What we're looking for
We are looking for a badass Machine Learning Research Scientist to join our growing team! The primary responsibility of the role is to develop and validate methods, models, and new capabilities that can augment or become production-level services or products. Our goal is to understand internal cognitive states from whatever information can be collected from external, observable phenomena, so the unifying theme of the researcher's work is developing techniques that improve the quality and diversity of the information we can access. Primarily, this means applying computer vision (though by no means exclusively; other modalities can be and are evaluated) to high-resolution, high-frame-rate videos of human subjects, captured both in internal, highly controlled experiments akin to psychophysics studies and in less controlled versions of the same tasks collected via mobile devices or in-browser applications.
Extraction of useful information from videos tends to follow a two-step process. First, we extract features, typically by applying one or more neural networks (usually implemented in PyTorch, though alternate frameworks that achieve comparable results are fine) to transform the spatially and temporally extended video samples into more compact representations. Second, we feed those representations into downstream analyses: classification, regression, time-series forecasting, and other statistical work such as hypothesis testing and general exploration as needed.
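To make the two-step shape of that pipeline concrete, here is a minimal PyTorch sketch: a small convolutional encoder stands in for step one (per-frame feature extraction) and a linear head stands in for step two (classification over the pooled features). The architecture, dimensions, and three-class label set are illustrative assumptions, not Senseye's actual models.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Step 1: per-frame feature extraction (a stand-in for a real backbone)."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch * time, 3, H, W) -> (batch * time, feat_dim)
        x = self.conv(frames).flatten(1)
        return self.proj(x)

def video_features(encoder: FrameEncoder, video: torch.Tensor) -> torch.Tensor:
    # video: (batch, time, 3, H, W); encode every frame, then mean-pool over time
    b, t, c, h, w = video.shape
    feats = encoder(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
    return feats.mean(dim=1)  # (batch, feat_dim)

encoder = FrameEncoder()
classifier = nn.Linear(64, 3)  # Step 2: three hypothetical cognitive-state classes

video = torch.randn(2, 8, 3, 64, 64)  # 2 clips, 8 frames each (synthetic data)
logits = classifier(video_features(encoder, video))
print(logits.shape)  # torch.Size([2, 3])
```

In practice the classifier head could just as easily be a regressor or a time-series model over the per-frame features; the pooled-feature interface between the two steps is the point of the sketch.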
Did you know that women apply for open jobs only if they think they meet 100 percent of the criteria listed? Men will apply to that same posting if they feel they meet 60 percent of the requirements.
We know that not everyone comes from the same background or has had the same experiences or education, and we wouldn't want it any other way. Don't worry about checking every single box; instead, we want you to bring your own unique outlook to the team, whatever that might be!