Case study

4bill improves knowledge flows with AWS machine learning services

Learn how 4bill, a payment platform from Ukraine, increased efficiency with knowledge systems powered by AWS machine learning.


Industry: Financial services
Size: SMB
Key focus: Machine learning
[Image: 4bill's online presence]

4bill.io is a multifunctional payment platform built to simplify and streamline the flow of money between businesses and their customers. With over six years of innovation under its belt, 4bill empowers companies of all sizes to accept payments in a fast, secure, and flexible way.

Serving a diverse range of industries - marketplaces, healthcare, insurance, educational platforms, gaming, ticketing, and more - 4bill offers a comprehensive suite of payment solutions (web, mobile, invoice, QR, widgets, SDKs) all within one integrated ecosystem.

Supported by a strong network of banking partners, a 97% transaction success rate, and an ever-growing team of over 150 professionals, 4bill is redefining what a payment infrastructure can (and should) be.

Opportunity Connecting internal knowledge

To create best-in-class solutions capable of competing in a dynamic, saturated market, one first needs the right tools. Oftentimes, these cannot simply be bought or leased, because they need to process highly sensitive internal data.

Such was the case with 4bill, which determined that to boost productivity it needed a better way to handle its scattered internal knowledge systems and the way that knowledge reaches employees. Data from Confluence, Jira, Slack, and other tools needed to be made accessible through a coherent, unified interface that would yield accurate, reliable answers — not to fragmented keyword queries, but to natural questions.

Systems based on machine learning can accommodate such needs, but 4bill's expertise lies in payment systems, not AI. The company needed specialists capable of delivering a knowledge assistant encompassing all pertinent data from its systems — past, present, and future, so it can scale alongside the growing business — and doing so while keeping all sensitive data secure.

This is where 4bill reached out to the machine learning experts at Chaos Gears.

Solution Augmenting knowledge flows with machines

Once our teams joined forces and our partner’s needs were clear, we formed an action plan and started developing a demonstration of the future system’s capabilities. For this, we outlined the target architecture of the entire system, but — for demonstration purposes — limited the scope to just one external source, albeit one that is central to 4bill’s internal communication and would also form the centerpiece of the project: Slack.

To prove the concept and validate the planned architecture, we built an ML-powered Slack chatbot driven primarily by AWS’ extensive catalogue of machine learning tools and services. We opted for serverless compute based on AWS Lambda to drive the system in a scalable architecture amenable to rapid development and further expansion.
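
For illustration, here is a minimal sketch of how such a serverless entry point might look: an AWS Lambda handler receiving Slack Events API payloads, assumed to arrive via Amazon API Gateway. The handler is a placeholder, not a detail of the delivered system.

import json

def lambda_handler(event, context):
    """Minimal entry point for Slack's Events API (illustrative only)."""
    body = json.loads(event.get("body", "{}"))

    # One-time URL verification handshake required by Slack's Events API.
    if body.get("type") == "url_verification":
        return {"statusCode": 200, "body": body["challenge"]}

    # A user mentioned the bot: acknowledge quickly and hand the question
    # over to the retrieval-augmented generation pipeline described below.
    slack_event = body.get("event", {})
    if slack_event.get("type") == "app_mention":
        question = slack_event.get("text", "")
        channel = slack_event.get("channel")
        print(f"Received question in {channel}: {question}")

    return {"statusCode": 200, "body": ""}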

Centralized knowledge

Within this architecture, Amazon S3 stores documents and artifacts, mostly unstructured. These documents are then fed — alongside data flows from vendor platforms such as Slack or Jira — to the centerpiece of the solution: Amazon OpenSearch. As a vector database, it provides a unified view into systematically indexed data from disparate sources, which tools higher up in the chain can then turn into natural language containing just the right information at just the right time.
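
As a rough sketch of the underlying idea, the snippet below pulls a document from Amazon S3, embeds it via Amazon Bedrock, and stores it in OpenSearch alongside source metadata. Bucket, index, endpoint, and model names are illustrative and authentication is omitted for brevity; in practice, managed ingestion through Amazon Bedrock Knowledge Bases automates much of this.

import json

import boto3
from opensearchpy import OpenSearch  # pip install opensearch-py

# Illustrative names only, not details of the production setup.
BUCKET = "knowledge-artifacts"
INDEX = "unified-knowledge"
EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")
search = OpenSearch(hosts=[{"host": "my-opensearch-endpoint", "port": 443}],
                    use_ssl=True)  # a real cluster also needs SigV4/auth config

def index_document(key: str, source: str) -> None:
    """Embed one S3 document and store it as a vector with its metadata."""
    text = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read().decode("utf-8")

    # Turn the raw text into an embedding vector via Amazon Bedrock.
    response = bedrock.invoke_model(
        modelId=EMBED_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    embedding = json.loads(response["body"].read())["embedding"]

    # Store the vector plus metadata so queries can filter by source system later.
    search.index(index=INDEX, body={
        "content": text,
        "embedding": embedding,
        "source": source,   # e.g. "slack", "jira", "confluence"
        "s3_key": key,
    })

index_document("handbooks/payments-faq.txt", source="confluence")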

This last step is made possible by a combination of Amazon Bedrock, which provides the LLMs needed to comprehend and process natural language, and custom machine learning models deployed with Amazon SageMaker, which greatly helps resolve traditional machine learning challenges. Amazon Bedrock Knowledge Bases then ties this together with context-aware data in a retrieval-augmented generation flow. We had to design the system for secure multi-tenancy, with users only able to retrieve information from documents they are individually authorized to view. This initial proof of concept helped us shape a solution around the flexible, straightforward metadata ingestion and query-time filtering features provided by Amazon Bedrock Knowledge Bases, in turn powered by Amazon OpenSearch.
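
A simplified sketch of what such a query-time filtered retrieval call can look like through the Bedrock agent runtime API follows; the knowledge base ID, model ARN, and metadata key are hypothetical placeholders rather than the production configuration.

import boto3

# Hypothetical identifiers for illustration only.
KB_ID = "EXAMPLEKBID"
MODEL_ARN = ("arn:aws:bedrock:eu-central-1::foundation-model/"
             "anthropic.claude-3-haiku-20240307-v1:0")

agent_runtime = boto3.client("bedrock-agent-runtime")

def ask(question: str, user_group: str) -> str:
    """Retrieval-augmented answer restricted to documents the caller may see."""
    response = agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        # Query-time metadata filter: only chunks tagged with
                        # the caller's group are retrieved (multi-tenancy).
                        "filter": {"equals": {"key": "allowed_group",
                                              "value": user_group}},
                    }
                },
            },
        },
    )
    return response["output"]["text"]

print(ask("How do we issue a refund for a failed QR payment?", "support-team"))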

Such clusters can be expensive to operate when deployed carelessly, but we avoided the pitfalls through optimized instance sizing, efficient indexing strategies, and properly tuned queries. These are typical database efficiency challenges, and it is common engineering knowledge that they should be tackled — but the actual "how" comes from experience and varies from project to project.
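
For illustration only, this is the kind of index definition where those decisions surface: shard and replica counts, the vector engine, and the embedding dimension all influence cost and latency. The values are assumptions suited to a small corpus, not 4bill's production settings.

from opensearchpy import OpenSearch

search = OpenSearch(hosts=[{"host": "my-opensearch-endpoint", "port": 443}],
                    use_ssl=True)

# Example settings only: real shard counts, replica counts, and HNSW parameters
# depend on data volume and query patterns, and are tuned per project.
search.indices.create(index="unified-knowledge", body={
    "settings": {
        "index": {
            "knn": True,
            "number_of_shards": 1,    # small corpus: avoid over-sharding
            "number_of_replicas": 1,  # availability without doubling the bill
        }
    },
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1024,  # must match the embedding model's output
                "method": {"name": "hnsw", "engine": "lucene",
                           "space_type": "cosinesimil"},
            },
            "content": {"type": "text"},
            "source": {"type": "keyword"},  # cheap exact-match filtering
        }
    },
})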

[Image: 4bill's comprehensive platform]

Orchestrated knowledge

The architectural choices we make generally stem from industry best practices, but best practices evolve — especially in relatively young domains, such as AI. Plus, the particular needs and challenges of a project often require adjustments. For this to be possible, it is crucial that we work with tools and services that are both robust and adaptable.

With the foundational elements based on AWS' extensive service list, we used LangGraph to orchestrate the chatbot's conversational workflow and manage its state. However, the sheer size of Slack's SDK is an issue in AWS Lambda's stateless environment, where loading the SDK and rebuilding state on each invocation can exceed Slack's timeout requirements and result in failed interactions. To overcome this, we used Amazon DynamoDB to persist already processed interaction state outside Lambda's execution scope, bringing resumption times within Slack's thresholds.
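
A minimal sketch of that pattern, assuming a DynamoDB table keyed by the Slack thread timestamp; the table name and attribute layout are illustrative, while the real schema follows the state objects produced by the LangGraph workflow.

import json
import time

import boto3

# Illustrative table name, not the production resource.
dynamodb = boto3.resource("dynamodb")
state_table = dynamodb.Table("slack-conversation-state")

def save_state(thread_ts: str, state: dict) -> None:
    """Persist the partially processed conversation outside the Lambda runtime."""
    state_table.put_item(Item={
        "thread_ts": thread_ts,       # the Slack thread acts as the key
        "state": json.dumps(state),   # serialized workflow state
        "updated_at": int(time.time()),
    })

def load_state(thread_ts: str) -> dict:
    """Resume quickly on the next invocation instead of rebuilding from scratch."""
    item = state_table.get_item(Key={"thread_ts": thread_ts}).get("Item")
    return json.loads(item["state"]) if item else {}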

Continuous quality

Not unlike typical software engineering, machine learning flows also require tuning in response to production data. To improve the quality of the responses the system generates, one first needs to be able to fully observe them. Then, given their volume, automated means of evaluating that quality are needed as well.

As AI systems become increasingly complex, the challenges inherent in distributed systems become more and more prominent. In this case, the execution flow spans multiple services — from AWS Lambda, through LangGraph and Amazon Bedrock, to Amazon OpenSearch — in a serverless environment with multiple points of divergence, which makes it difficult to trace and identify failure points.

To tackle this, we turned to Langfuse, which gave us the much-needed observability and formed the basis of an LLM-as-a-judge feedback loop for measuring and improving the quality of our solution's output. Using this approach, we cut down on manual testing while still ensuring consistent chatbot performance: the LLM provides automated validation of response accuracy and relevance against expected standards, requiring human supervision only when significant deviations occur.
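
Sketched below is one way such a feedback loop can be wired up: an LLM judge invoked through Amazon Bedrock scores an answer, and the score is attached to the corresponding Langfuse trace. The model choice, prompt, and score name are assumptions, and the scoring call reflects the Langfuse v2 Python SDK.

import boto3
from langfuse import Langfuse  # assumes the v2 SDK; credentials via env vars

JUDGE_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # illustrative choice

bedrock = boto3.client("bedrock-runtime")
langfuse = Langfuse()

def judge_response(trace_id: str, question: str, answer: str) -> float:
    """Score a chatbot answer with an LLM judge and attach it to the trace."""
    prompt = (
        "Rate how accurately and relevantly the answer addresses the question "
        "on a scale from 0 to 1. Reply with only the number.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    result = bedrock.converse(
        modelId=JUDGE_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    score = float(result["output"]["message"]["content"][0]["text"].strip())

    # Record the automated evaluation next to the observed trace in Langfuse,
    # so only low-scoring interactions need human review.
    langfuse.score(trace_id=trace_id, name="answer-quality", value=score)
    return score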

Outcome Ask and receive

With quick, easy access to information delivered directly in Slack, through a user-friendly, natural, unified interface that employees already use daily, 4bill's teams spend less time searching for answers and can instead focus on impactful, high-value work best suited to their competencies.

The solution delivered by Chaos Gears helps automate routine tasks and provides actionable recommendations based on accurate, reliable, and up-to-date information rooted in the company's collective knowledge base. Overall, this streamlines knowledge retrieval and cuts related costs not only for existing employees — it also shortens onboarding and training times.

From a technical perspective, the machine learning system is already capable of dealing with a large volume of queries and can easily (and automatically) scale out to accommodate 4bill’s growing business, while the architecture remains cost-effective and open to further expansion.

As 4bill’s teams benefit from improved productivity, efficiency, and decision-making, alongside enhanced experiences, we look forward to further projects together to deepen the integration while our partner works towards redefining the landscape of payment infrastructure providers.

Core tech

AWS Lambda, Amazon S3, Amazon OpenSearch, Amazon Bedrock (including Knowledge Bases), Amazon SageMaker, Amazon DynamoDB, LangGraph, Langfuse

We'd love to help you too

Every successful project is unique — as will be yours. Get in touch.