Sean McGregor is a machine learning safety researcher and founding director with the Digital Safety Research Institute at the UL Research Institutes. Prior to joining Underwriters Laboratories, Dr. McGregor launched the AI Incident Database while training edge neural network models for the neural accelerator startup Syntiant. With an applications-centered research program spanning reinforcement learning for wildfire suppression and deep learning for heliophysics, Sean has covered a wide range of safety-critical domains. Outside his day jobs, Sean's open source development work has earned media attention in The Atlantic, Der Spiegel, Wired, VentureBeat, Vice, and O'Reilly, while his technical publications have appeared in a variety of machine learning, HCI, ethics, and application-centered proceedings.
Social Impact Work
I am very active in developing open source code and organizations for social impact. Most recently I led the engineering and management of the AI Incident Database, a collection of AI harms and near-harms realized in the real world. The database is funded by private foundation donors and is the sole project of the Responsible AI Collaborative. The Digital Safety Research Institute, where I serve as a director, continues to engineer the database through the efforts of Kevin Paeth and his exceptional crew of developers. I also previously developed the Privly Foundation, which was dedicated to online privacy education. The foundation's activities included developing open source software, running technology workshops, and supervising student developers.
Engineering
I strive to cover as much of the machine learning stack as possible, from cloud infrastructure to hardware-accelerated edge runtimes. Particular strengths of mine are Python, Keras (defining high-level APIs accelerated by hardware), JavaScript, React, TensorFlow, CI/CD, and AWS/GCP/Azure. My engineering projects have spanned dataset acquisition, preparation, training, and model analysis tools. A particular engineering joy of mine is building systems (typically on the web stack) that explain the strengths and failings of trained models.
Past Efforts
My foundational post-doctoral work centered on making ultra-low power neural network inference feasible through work at Syntiant. To date, Syntiant has shipped more neural ASICs than any silicon provider in the world, for problems ranging from voice interfaces to sensor fusion. I left my full-time position with Syntiant in January 2022 so I could focus on AI assurance, including efforts related to the AI Incident Database and developing a machine learning system testing platform that was acquired by Underwriters Laboratories in 2023. For a full professional history, please view my LinkedIn profile.
Prior to Syntiant, my grad school career covered four distinct areas. First, I developed a simulator for fire, forest growth, timber sales, and weather. Second, I developed visual analytic tools for exploring the policy space of Markov Decision Processes (MDPs), including the wildfire simulator. Since many MDPs are defined by computationally expensive simulators, I next developed a surrogate modeling method that brings interactive specification of policy, reward, and optimization functions to large state space MDPs. My final area of focus was Bayesian policy search using the surrogate model I developed.
Curriculum Vitae
The maintained records of my academic and professional histories are Google Scholar and LinkedIn, respectively. I also give a narrative of my past and present efforts below, including details on my contributions to co-authored works.
= A particular career highlight.
Papers
Book Chapters
Posters without Accompanying Presentations or Papers