ML Observability group

Topics include DevOps, reinforcement learning, and causal inference


ML Observability involves the real-time monitoring and dynamic deployment of ML models. Deployed models respond to live data, and that input data is often fundamentally different from the data the model was trained on: weekend advertising data, for example, can differ sharply from weekday data. Aggregating everything into one massive training set can destroy this important structure. ML Observability research involves developing monitoring and observability tools for the real-time deployment of ML models.

Monitoring tools measure and log data that help people understand the state of their systems. Monitoring is based on gathering predefined sets of metrics or logs.
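
To make the idea of gathering predefined metrics concrete, here is a minimal stdlib-only sketch of a metric logger; the class and method names (`MetricLogger`, `incr`, `observe`) are hypothetical, not from any particular monitoring library:

```python
import statistics
from collections import defaultdict

class MetricLogger:
    """Collects a predefined set of metrics: event counters and numeric observations."""

    def __init__(self):
        self.counters = defaultdict(int)      # e.g. number of predictions served
        self.observations = defaultdict(list) # e.g. per-request latency samples

    def incr(self, name, amount=1):
        """Increment a named counter."""
        self.counters[name] += amount

    def observe(self, name, value):
        """Record one numeric observation under a named metric."""
        self.observations[name].append(value)

    def summary(self):
        """Summarize everything gathered so far for logging or display."""
        return {
            "counters": dict(self.counters),
            "observations": {
                name: {"mean": statistics.mean(vals), "n": len(vals)}
                for name, vals in self.observations.items()
            },
        }

# Usage: instrument a (hypothetical) model-serving loop.
log = MetricLogger()
log.incr("predictions")
log.incr("predictions")
log.observe("latency_ms", 12.0)
log.observe("latency_ms", 18.0)
print(log.summary())
```

A real deployment would ship these summaries to a metrics backend on a schedule; the point here is only that monitoring starts from a fixed, predeclared set of measurements.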

Observability tools help people visualize and understand the data gathered by monitoring.
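
As a toy example of turning gathered data into something a person can read, here is a stdlib-only text histogram; the function name `ascii_histogram` and the sample values are made up for illustration:

```python
def ascii_histogram(values, bins=5):
    """Render logged numeric values as a quick text histogram,
    a minimal 'observability view' over monitored data."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1  # avoid zero width when all values are equal
    counts = [0] * bins
    for v in values:
        # Clamp the top edge into the last bin.
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    lines = []
    for i, count in enumerate(counts):
        left_edge = lo + i * width
        lines.append(f"{left_edge:6.2f} | {'#' * count}")
    return "\n".join(lines)

# Usage: visualize a (hypothetical) latency sample.
print(ascii_histogram([1, 2, 2, 3, 3, 3, 4, 9]))
```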

Sub-projects include:

  • Distribution analysis & checks
  • Feature analysis
  • Bias analysis
  • Anomaly detection
  • Model drift and performance checking and alerts
  • Dynamic model selection
  • Explainability
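
As a sketch of the first sub-project, distribution checks often compare live input data against training data. Below is a stdlib-only implementation of the two-sample Kolmogorov-Smirnov statistic (the maximum gap between the two empirical CDFs), one common way to quantify drift such as the weekday-versus-weekend shift described above; the threshold you would alert on is application-specific:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of the two samples.
    0.0 means the empirical distributions coincide; 1.0 means they
    do not overlap at all."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # Advance past all values equal to x in both samples (handles ties).
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Usage: compare (hypothetical) weekday training data with weekend live data.
weekday = [10, 12, 11, 13, 12, 11]
weekend = [20, 22, 21, 23, 22, 21]
print(ks_statistic(weekday, weekend))  # disjoint ranges -> 1.0
```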

Data Augmentation

The data augmentation subgroup is focused on creating “fake” (synthetic) data that is statistically indistinguishable from real data, whether image, sound, text, tabular, or any other data of interest to the researcher. Students interested in this group should be comfortable with the NU Discovery cluster and with deep learning. The work has two sides: creating fake data (mostly using deep learning: GANs, transformers, etc.) and evaluating fake data (statistical and visual tests that try to discriminate real from fake).
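
A very crude stand-in for the evaluation side is comparing summary statistics of real and synthetic samples: if even the first two moments differ noticeably, the fake data is easy to discriminate. This stdlib-only sketch uses Gaussian draws as a placeholder for real and generated data, and the function name `moment_gap` is invented for illustration:

```python
import random
import statistics

def moment_gap(real, fake):
    """Compare the first two moments of real vs. synthetic samples.
    Large gaps mean the fake data is trivially distinguishable;
    small gaps are necessary (but far from sufficient) for
    statistical indistinguishability."""
    return {
        "mean_gap": abs(statistics.mean(real) - statistics.mean(fake)),
        "std_gap": abs(statistics.pstdev(real) - statistics.pstdev(fake)),
    }

# Usage: placeholder data standing in for real samples and generator output.
random.seed(0)
real = [random.gauss(0, 1) for _ in range(1000)]
fake = [random.gauss(0, 1) for _ in range(1000)]
print(moment_gap(real, fake))
```

Real evaluations in this subgroup go much further (classifier-based two-sample tests, visual inspection), but they follow the same logic: look for any statistic that separates real from fake.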

Meetings will be weekly for an hour over Teams.
Email AI Skunkworks to express interest or to join.
Email : aiskunkworks@northeastern.edu
Website : https://neu-ai-skunkworks.github.io/