AI in River Security

River Security uses Artificial Intelligence (AI) to support business automation, development, incident response, penetration testing, and other selected operational processes.

Our AI policy specifies which data may be sent where and in which scenarios, with examples of usage, based on our internal data classification policy. The classification policy dictates what kinds of measures each type of data must be protected with. The classification levels are listed below:

  • Public
  • Internal use only
  • Customer confidential
  • Company confidential

We do not share customer confidential data or confidential company information with third-party AI providers. All such data is handled in accordance with our internal data classification and tagging policies. Where AI processing is required, it is performed exclusively through self-hosted AI models operated within our controlled infrastructure in Germany (Hetzner). To help guarantee that appropriate AI models are used, we deploy an LLM proxy that enables administrators to control which AI integrations are allowed to use which models, effectively allowing us to deny access to third parties whenever customer confidential data is in play. This proxy is also where our guardrails are deployed.
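As a rough illustration, the proxy-side allowlist logic described above can be sketched like this. The classification labels, provider names, and function are hypothetical placeholders, not our actual configuration:

```python
# Hypothetical sketch of an LLM-proxy allowlist keyed on data
# classification. Labels and provider categories are illustrative only.
ALLOWED_PROVIDERS = {
    "public": {"self-hosted", "third-party"},
    "internal": {"self-hosted", "third-party"},
    "customer-confidential": {"self-hosted"},
    "company-confidential": {"self-hosted"},
}

def is_request_allowed(classification: str, provider: str) -> bool:
    """Return True if the proxy should forward this request to the provider."""
    return provider in ALLOWED_PROVIDERS.get(classification, set())
```

Under such a scheme, a request tagged as customer confidential can only ever reach self-hosted models; unknown classifications are denied by default.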

Examples of customer confidential data:

  • Pentest reports
  • Incident Response reports
  • Active Focus findings, security observables, leaks, credentials, and other security-sensitive data
  • Customer data classified as internal (the customer can give explicit approval for sharing)

In specific cases, such as Crystal Box penetration testing, customers may provide explicit consent to the use of external frontier AI models, such as those from Anthropic or OpenAI. This is strictly limited to scenarios where such tools improve the identification of vulnerabilities and risks in source code or related assets.

River Security maintains an internal AI governance policy defining approved use cases, controls, and safeguards across the organization, mapped to our data classification policy.


How AI is used

Since the inception of River Security, we have been at the forefront of supporting speed, accuracy, and continuity in testing. To ensure this, we have developed a robust penetration testing methodology, described here: Penetration Testing Methodology

As per our methodology, we consider Artificial Intelligence "yet another tool", albeit a strong one. Tools are not the defining factor of a penetration test, but they are useful to lean on to make sure all vulnerabilities are found. AI is heavily used in many of our day-to-day processes.

Access to AI models is provided through a centralized point within River Security, where administrators enable or disable which models can be used and from which providers, including access to our locally hosted AI models.

In certain cases we can leverage third-party frontier AI models by applying data anonymization before sending the data. In other cases we do not send customer-identifying data to third parties at all, but use AI for enrichment without customer data. These techniques are used sparingly, since de-anonymization techniques may exist.
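A minimal sketch of what pre-submission anonymization can look like, assuming simple regex-based masking. Real anonymization pipelines are more thorough; the patterns and placeholders here are illustrative assumptions only:

```python
import re

# Illustrative masking of customer-identifying strings before a prompt
# leaves controlled infrastructure. Patterns are simplified examples.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),  # IPv4 addresses
]

def anonymize(text: str) -> str:
    """Replace identifying substrings with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

The masked text can then be sent to an external model, while the mapping from placeholder back to the original value never leaves the controlled environment.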

Below are examples of AI usage in our platform and team:

Automation

Any good Attack Surface Management (ASM) platform should leverage the power of LLMs to enrich and contextualize assets such as entities and observables. Below are some examples of how our automation leverages AI:

  • Data normalization of unstructured data, such as TLP green/white threat intelligence, and correlation with previously collected threat intelligence.
  • Machine learning models for predicting contextualized data for discovery reasons.
  • Named Entity Recognition models for enrichment purposes
  • Feature discovery for applying penetration testing methodology
  • Language review and methodology conformity
  • Specific penetration testing efforts across different features

Manual Testing

All penetration testers have access to frontier models from different AI vendors, as well as local models, which they can use to increase their knowledge of a particular problem or technology, get help with penetration testing efforts, and more.

Keep in mind, under no circumstances should customer confidential data, e.g. data retrieved from an internal penetration test, be sent to a third-party AI provider without the customer's explicit consent.

Assessing source code alongside penetration testing has grown into a very fruitful way to ensure the security of a system, and AI models are now particularly good at finding bugs and vulnerabilities through source-code review. We encourage customers to integrate their source code with River Security for the purpose of letting AI find bugs and vulnerabilities; however, this is not a requirement, and it requires explicit consent.

Incident Response

During incident response, we use AI to dynamically create playbooks based on the incident and the mix of technologies involved. Under no circumstances will customer-specific data from the incident (this is customer confidential) be sent to a third-party AI provider. However, several cases do exist where data is shared with AI providers: for example, when we reverse engineer malware, the malware may be analyzed by AI-powered tools.