Labs

Our AI Labs

BO LAB

We leverage a combination of supervised and unsupervised learning models to tackle complex back-office processes in the financial services sector. Our expertise lies in developing a robust, multi-node message exchange layer that enables seamless communication between front-office and back-office systems, as well as between internal departments and external entities.

SWIFT LAB

We are working on utilizing advanced supervised learning models to streamline and simplify the intricate mapping layers involved in SWIFT message exchanges. Our cutting-edge transformer technology meticulously curates SWIFT message content, enabling it to train, customize, and dynamically deploy the exchange interface. This ensures a highly efficient, adaptive, and automated messaging framework for financial services operations.

ALPHA LAB

We are developing applications that help portfolio managers optimize their portfolios by leveraging the LLM and ML capabilities offered by NVIDIA frameworks. Using high-performance computing, these applications provide portfolio risk decomposition, stress testing, simulations, slice-and-dice analysis, and look-through.
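To illustrate one of the analytics named above, here is a minimal, self-contained sketch of portfolio risk decomposition using the standard Euler decomposition of volatility. The weights and covariance matrix are invented for illustration; a production application would source these from market data and run at much larger scale on GPU.

```python
import math

def risk_contributions(weights, cov):
    """Decompose portfolio volatility into per-asset risk contributions.

    Euler decomposition: each asset's contribution is
    w_i * (cov @ w)_i / portfolio_vol, and the contributions
    sum exactly to total portfolio volatility.
    """
    n = len(weights)
    # Marginal risk of each asset: (cov @ w)_i
    marginal = [sum(cov[i][j] * weights[j] for j in range(n)) for i in range(n)]
    variance = sum(weights[i] * marginal[i] for i in range(n))
    vol = math.sqrt(variance)
    contribs = [weights[i] * marginal[i] / vol for i in range(n)]
    return contribs, vol

# Illustrative two-asset portfolio (weights and covariances are made up).
w = [0.6, 0.4]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
contribs, total_vol = risk_contributions(w, cov)
print(total_vol, contribs)
```

A useful property for reporting: the per-asset contributions always sum to the portfolio's total volatility, so they can be shown as percentages of total risk.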

GLYPHFILE

We harness the power of large language models to develop applications that generate essential business documents, including business requirements, functional specifications, test packs, architecture designs, and project proposals. Our tools empower business analysts and test analysts with domain-specific knowledge and expertise, enabling them to analyze and solve business problems effectively.

BO LAB

Prompt Engineering Using the Chain-of-Thought (CoT) Technique

Chain-of-thought (CoT) prompting is a technique enabling large language models (LLMs) to tackle problems through a series of intermediate steps before arriving at a final answer. By prompting the model to break down a multi-step problem into sequential reasoning steps, CoT prompting enhances the model’s reasoning capabilities. This method helps LLMs handle complex reasoning tasks that necessitate logical thinking and multiple steps, such as arithmetic or commonsense reasoning questions.
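As a minimal sketch of the technique described above, a CoT prompt asks the model to produce intermediate reasoning steps before its final answer, often seeded with a worked exemplar. The settlement-date exemplar below is illustrative, and the actual LLM call is omitted because the client is deployment-specific.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought prompt.

    The few-shot exemplar shows the model the expected format:
    intermediate reasoning steps first, then a final answer.
    """
    exemplar = (
        "Q: A trade settles T+2. If it is executed on a Monday, "
        "when does it settle?\n"
        "A: Let's think step by step. Execution is Monday. "
        "T+1 is Tuesday. T+2 is Wednesday. "
        "The answer is Wednesday.\n\n"
    )
    # The trailing cue nudges the model to emit its reasoning
    # before committing to an answer.
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A trade settles T+3 and is executed on a Thursday. When does it settle?"
)
print(prompt)
```

The same question without the exemplar and the "think step by step" cue would be a standard prompt; the CoT variant tends to improve accuracy on exactly the multi-step arithmetic and commonsense tasks mentioned above.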

SWIFT LAB

Text-to-Text Omni Channel Messaging Layer

We use the Retrieval-Augmented Generation (RAG) technique, a two-step process in which a system first retrieves relevant documents and then uses a Large Language Model (LLM) to generate an output based on this information. The initial step retrieves documents (SWIFT documents, market practice guides, and in-house SWIFT examples) using dense embeddings, which means encoding both the query and the documents into vectors and finding the closest matches. This retrieval process can use various index formats, such as vector databases, summary indexes, tree indexes, or keyword tables.
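The retrieval step can be sketched as follows. For the embedding, this toy version uses bag-of-words vectors with cosine similarity purely so the example is self-contained; a real system would use a neural sentence-embedding model and a vector database, and the corpus snippets below are invented placeholders for SWIFT documentation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would
    # encode text with a neural embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "MT103 single customer credit transfer field specifications",
    "Market practice guide for securities settlement messages",
    "Internal style guide for meeting notes",
]
print(retrieve("customer credit transfer MT103", corpus, k=1))
```

The same retrieve-then-rank shape applies whether the closest-match search runs over a vector database, a summary index, or a keyword table; only the index structure behind the similarity lookup changes.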

Once the most relevant documents are selected, the LLM generates a response by incorporating information from both the query and the retrieved documents. This is an effective approach for handling specific or dynamic information that was not included in the model's original training data. Additionally, RAG employs "few-shot" learning, where the model uses a small number of examples, often automatically retrieved, to inform its outputs.
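The generation step described above can be sketched as prompt assembly: the retrieved documents are injected as context ahead of the user's query, and the combined prompt is sent to the LLM. The document snippets here are illustrative stand-ins for retrieved SWIFT material, and the LLM call itself is omitted as deployment-specific.

```python
def build_rag_prompt(query: str, retrieved_docs: list[str]) -> str:
    """Assemble the augmented prompt for the generation step.

    The retrieved documents become in-context examples/evidence,
    which is how RAG supplies information absent from the model's
    original training data.
    """
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer:"
    )

# Illustrative retrieved snippets (placeholders, not verbatim SWIFT text).
docs = [
    "MT103: field 50a identifies the ordering customer.",
    "Market practice: field 59a must contain the beneficiary account.",
]
prompt = build_rag_prompt(
    "Which MT103 field identifies the ordering customer?", docs
)
print(prompt)
```

Because the retrieved snippets double as in-context examples, this is also where the few-shot behavior mentioned above comes in: the model conditions its output on a handful of automatically retrieved exemplars rather than on fine-tuned weights.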