One Platform. 20 Algorithms. Three Layers of Intelligence.

Each day, OnCorps runs more than 20 algorithms in production for the world's largest capital markets firms. We deliver advanced AI in three layers. The bottom layer is a secure AI suite: as an AWS Advanced Partner with OpenAI approval for zero data retention, we uphold stringent SOC 1 and ISO 27001 standards, so your operations are both advanced and secure. The middle layer comprises pre-trained algorithms. The top layer lets our customers share applications that leverage those pre-trained algorithms.

The AI Integration Suite

Much attention is given to the potential of AI, but little is said about its execution challenges. OnCorps has integrated key components to significantly reduce execution hurdles. Our AI Integration Suite makes it easier to extract data from many sources into common formats. It also enables rapid reconciliation of data to help in curation and incident detection. In addition, more sophisticated agents and algorithms can be modeled and tested in an integrated manner.

OnCorps Programmable Pipeline Tool

The programmable pipeline tool is a collection of utilities for extracting data from documents and electronic files. The OnCorps platform converts the extracted data into common formats so it can be easily reconciled.
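As an illustration only, the idea of pulling heterogeneous sources into one common record shape can be sketched in a few lines of Python. The function name and formats below are hypothetical, not OnCorps APIs:

```python
import csv
import io
import json

def to_records(raw: str, fmt: str) -> list[dict]:
    """Normalize raw CSV or JSON text into a common list-of-dicts shape."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    raise ValueError(f"unsupported format: {fmt}")

# Two sources, two formats, one resulting record shape (illustrative data).
csv_raw = "trade_id,amount\nT1,100.0\nT2,250.5\n"
json_raw = '[{"trade_id": "T1", "amount": "100.0"}]'

print(to_records(csv_raw, "csv"))
print(to_records(json_raw, "json"))
```

Once every source lands in the same shape, downstream reconciliation can compare records field by field without caring where they came from.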

OnCorps Reconciliation Builder

The reconciliation builder makes it easy to configure precise checks across two or more tables. OnCorps uses this tool to build incident detection checkers: sophisticated checks that search datasets for the features that caused past incidents.
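A minimal sketch of what a configurable cross-table check might look like, assuming two tables keyed on a shared identifier (the function, field names, and data are illustrative, not the OnCorps implementation):

```python
def reconcile(left: list[dict], right: list[dict], key: str, fields: list[str]) -> list[dict]:
    """Flag rows whose checked fields differ between two tables,
    plus rows present in the left table but missing from the right."""
    right_by_key = {row[key]: row for row in right}
    breaks = []
    for row in left:
        other = right_by_key.get(row[key])
        if other is None:
            breaks.append({"key": row[key], "issue": "missing in right table"})
            continue
        for f in fields:
            if row[f] != other[f]:
                breaks.append({"key": row[key], "issue": f"{f}: {row[f]} != {other[f]}"})
    return breaks

# Illustrative example: a fund record disagreeing with a custodian record.
fund = [{"id": "T1", "amount": "100.0"}, {"id": "T2", "amount": "250.5"}]
custodian = [{"id": "T1", "amount": "100.0"}, {"id": "T2", "amount": "255.5"}]
print(reconcile(fund, custodian, "id", ["amount"]))  # flags the T2 amount break
```

An incident detection checker follows the same pattern, with the checked fields chosen to match the features that caused a past incident.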

OnCorps Algorithm Manager

The algorithm manager leverages Databricks to build models, compare them, and manage their lifecycles. It creates the sophisticated data structures that enable us to share pre-trained algorithms securely.

OnCorps AI Testing Manager

We take algorithm testing seriously. Before launch, we thoroughly test algorithms to ensure they perform at acceptable levels. Testing happens at three levels. First, we compare algorithms to select the most accurate models. Second, we perform red-team and human-in-the-loop testing to confirm that our algorithms catch past issues and are seen as valuable by the people who use them. Finally, we can establish a real-world baseline of the decisions people make using our decision simulation tools; once this human performance baseline is set, we can measure whether our algorithms perform better.
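The third level, comparing an algorithm against a human performance baseline, reduces to scoring both against actual outcomes on the same cases. A minimal sketch under that assumption (the decision log below is invented for illustration):

```python
# Hypothetical decision log: (case_id, human_decision, algorithm_decision, actual_outcome)
decisions = [
    ("C1", 1, 1, 1),
    ("C2", 0, 1, 1),
    ("C3", 0, 0, 0),
    ("C4", 1, 0, 0),
]

# Score humans and the algorithm against the same actual outcomes.
human_baseline = sum(h == actual for _, h, _, actual in decisions) / len(decisions)
algo_score = sum(a == actual for _, _, a, actual in decisions) / len(decisions)

print(f"human={human_baseline:.2f} algorithm={algo_score:.2f} "
      f"beats_baseline={algo_score > human_baseline}")
```

The same comparison generalizes to whatever accuracy or value metric the decision simulation defines.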

Read more about our other platform layers

Pre-Trained Algorithms


See a demo