Here are some recent projects:
Dr. Fred Hickernell, Dr. Sou-Cheng Choi, and I came together to organize this project, which brings quasi-Monte Carlo methods to a broader audience by encapsulating cutting-edge techniques in a Python library. Quasi-Monte Carlo methods improve on standard Monte Carlo methods by replacing i.i.d. sampling with specially structured point sets. Aleksei Sorokin and Dr. Jagadees Rathanival have worked hard to implement these methodologies and to collaborate with experts in other communities to identify opportunities for quasi-Monte Carlo methods to shine. Check out our blog, and reach out if you are interested in learning more about quasi-Monte Carlo.
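To give a flavor of the idea (this sketch uses SciPy's `scipy.stats.qmc` module rather than our library, and the integrand is just an illustrative choice): structured low-discrepancy points, such as a scrambled Sobol' sequence, typically estimate an integral more accurately than the same number of i.i.d. points.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(7)
n, d = 2**10, 2  # sample size (a power of 2, which Sobol' prefers) and dimension

# i.i.d. Monte Carlo points vs. a scrambled Sobol' sequence
iid_points = rng.random((n, d))
sobol_points = qmc.Sobol(d=d, scramble=True, seed=7).random(n)

# Estimate the integral of f(x, y) = x * y over the unit square (true value 1/4)
f = lambda pts: pts[:, 0] * pts[:, 1]
mc_estimate = f(iid_points).mean()
qmc_estimate = f(sobol_points).mean()
```

For smooth integrands like this one, the quasi-Monte Carlo estimate converges at a much faster rate in `n` than the standard Monte Carlo estimate.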
An increasingly important theme in the AI/ML community is reaching beyond the standard ML conference audience to find opportunities for AI/ML to be applied more broadly. To this end, Dr. Harvey Cheng and I worked with materials scientists Dr. Paul Leu and Dr. Sajad Haghanifar to use active learning and Bayesian optimization to find high-performing glass fabrication strategies with minimal experimentation.
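The core loop of Bayesian optimization is easy to sketch. This is not our actual fabrication pipeline; the 1-D objective below is a made-up stand-in for an expensive experiment, fit with scikit-learn's Gaussian process regressor and a standard expected-improvement criterion.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def experiment(x):
    # Hypothetical stand-in for an expensive fabrication experiment
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))        # a few initial experiments
y = experiment(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):                        # sequential design loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True, alpha=1e-6)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected improvement: balances exploiting high mu and exploring high sigma
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, experiment(x_next).ravel())
```

Each iteration spends one "experiment" where the model predicts either a good outcome or high uncertainty, which is exactly the budget-conscious behavior we wanted for fabrication.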
Dr. Kyle Emich and Dr. Li Lu invited me to join a project studying team dynamics. In particular, we wanted to devise a strategy for incorporating individual-level attributes across team members, to identify how the alignment between two or more attributes can be used to predict team-level outcomes. We created multiple mechanisms for computing alignment, based on mathematical and physical concepts not generally present in the business/psychology literature where this question is studied.
Check out our article on how this alignment can be defined for two attributes. We are currently working on new research to extend this to larger numbers of attributes.
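As a purely hypothetical illustration of the kind of quantity involved (this is not the measure from our article), one simple way to score alignment between two attributes measured across a team is a centered cosine, which compares the pattern of scores rather than their overall level.

```python
import numpy as np

def attribute_alignment(a, b):
    """Centered-cosine alignment between two attributes across a team.

    a, b: arrays of per-member scores (one entry per team member).
    Returns a value in [-1, 1]: +1 when members high on one attribute are
    also high on the other, -1 when the attributes are opposed.
    """
    a = np.asarray(a, float) - np.mean(a)   # center: compare patterns, not levels
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Five team members scored on two attributes (made-up data)
print(attribute_alignment([5, 3, 4, 2, 1], [4, 3, 4, 2, 2]))  # prints ≈ 0.949
```

The interesting research questions start where this toy stops: how to define analogous quantities for three or more attributes at once.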
Dr. Greg Fasshauer and I worked together for a year and a half to write a research textbook that helps bring together the research on kernel methods in the numerical analysis and spatial statistics communities. Part of the purpose was to unify notation where possible; part was to provide a computational implementation (in this case in Matlab, to speak to graduate students); and part was to discuss recent exciting advances, such as the Hilbert-Schmidt SVD and generalized Sobolev spaces. Future work will include applications of Gaussian processes (such as active learning and Bayesian optimization) and will probably include a Python implementation alongside the Matlab one.
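The basic computation at the heart of these kernel methods fits in a few lines. This sketch (in Python rather than the book's Matlab, with an arbitrary test function and shape parameter) interpolates scattered data by solving a linear system with the Gaussian kernel's Gram matrix.

```python
import numpy as np

def gaussian_kernel(x, z, eps=5.0):
    """Gaussian (RBF) kernel K(x, z) = exp(-eps^2 * |x - z|^2)."""
    return np.exp(-eps**2 * (x[:, None] - z[None, :])**2)

# Interpolate f(x) = sin(2*pi*x) from 9 samples on [0, 1]
x_data = np.linspace(0, 1, 9)
y_data = np.sin(2 * np.pi * x_data)

K = gaussian_kernel(x_data, x_data)          # kernel (Gram) matrix
coef = np.linalg.solve(K, y_data)            # interpolation coefficients

# Evaluate the kernel interpolant on a fine grid and check its accuracy
x_eval = np.linspace(0, 1, 101)
y_interp = gaussian_kernel(x_eval, x_data) @ coef
max_err = np.max(np.abs(y_interp - np.sin(2 * np.pi * x_eval)))
```

Topics like the Hilbert-Schmidt SVD exist precisely because this Gram matrix becomes badly conditioned as the shape parameter `eps` shrinks, and stable alternatives to the naive solve are needed.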
In many circumstances, customers may be able to state preferences between outcomes without being able to quantify those preferences. For example, I may prefer Pepsi over Coke, but I cannot attach a number to that preference. This prevents standard optimization tools from being used to identify the most preferred outcome, even though that is surely the outcome which should be sought. Ian Dewancker and I worked to develop software that applies Gaussian processes in a latent space to carry out such a search.
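The underlying idea, recovering numerical utilities from purely ordinal comparisons, can be illustrated without the Gaussian process machinery. This sketch fits a simple Bradley-Terry model (not our actual method, and the preference data is made up) to pairwise comparisons via gradient ascent.

```python
import numpy as np

# Pairwise preferences among 4 options, as (winner, loser) pairs:
# option 0 was preferred over option 1, etc. (hypothetical data)
comparisons = [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1), (3, 2)]
n_items = 4

# Fit latent utilities u with a Bradley-Terry model:
#   P(i preferred over j) = sigmoid(u[i] - u[j])
u = np.zeros(n_items)
lr = 0.1
for _ in range(2000):
    grad = -0.05 * u                # small ridge penalty keeps utilities finite
    for winner, loser in comparisons:
        p = 1.0 / (1.0 + np.exp(u[winner] - u[loser]))  # 1 - P(observed outcome)
        grad[winner] += p
        grad[loser] -= p
    u += lr * grad
    u -= u.mean()                   # utilities are only identified up to a shift

ranking = np.argsort(-u)            # most preferred option first
```

Once latent utilities exist, standard tools (including Gaussian process models over those utilities) can search for the most preferred outcome, even though no one ever stated a number.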
If you are feeling very generous, or very bored, feel free to check out our book.
Copyright © 2022