Enterprise Cyber Fraud
by wolfshirts
Common Cyber Fraud
There are many types of cyber fraud; this post looks at common automation tactics. When we think of cyber fraud, we often think of the human element: phishing, malware, smishing, whaling, and so on. We don’t usually stop to think about the automated side of things.
Large enterprises will often run up against a mixture of these attacks:
- ATO (Account Takeover)
ATO attacks are fairly common. An attacker can have many reasons to want an account: sometimes an account provides leverage into infrastructure; other times valid credit card information stored in the account can be used to purchase items. In some cases account creation is heavily controlled and the attacker needs a large number of accounts for arbitrage.
- Credential Stuffing
Credential stuffing is common among attackers. Because of the sheer volume of data leaks, attackers will often attempt to re-use leaked credentials across multiple domains and sites.
- Carding
Carding can be found anywhere there is credit card validation. This attack is the routine checking of stolen cards to determine whether they are still viable. Validated cards are often resold later in bulk.
- Scraping
Scraping isn’t necessarily an attack, but it can be used to catch items that have been misposted, or by competitors to set prices. It can also be used to gather information on competitors or individuals. Constant scraping can pick up mistakes where information was posted before it was ready to be public.
- Automated Purchasing
Automated purchasing for arbitrage is a big-money industry. Using automation to buy up hot items and resell them is a common practice.
- Information Discovery
Automation used to discover endpoints, routes, and resources. It can also be used to find information disclosure from specific resources, including but not limited to customer details, employee details, phone numbers, email addresses, and valid logins.
Damage Done
These attacks can damage a company’s reputation, and they can damage the bottom line. Infrastructure spend to absorb the automation can be significant. Account creation can require SMS validation, generating massive charges. Supply shortages, combined with re-sale of goods, can lead to extreme price gouging.
Scraping alone can run up infrastructure costs substantially and amplify the effect of a human mistake. The damage comes in the constant compute cost of serving the scraping, plus the cost of a mistake: frequently a product listed below its expected price or for free, data left available where it shouldn’t be, or draft posts of private information. Scraping can also feed further infrastructure discovery.
Automated purchasing is often done in large volumes and leads to supply shortages and a massive mark-up on the product.
Detection
Most large companies have some sort of automation problem and are simply unaware of it.
- Day/Night Cycle:
Does the traffic fluctuate the way human traffic should, or is it a flat line? Traffic that stays flat around the clock is generally an indicator of automation (a minimal sketch of this check follows the list).
- Look on an endpoint-by-endpoint basis:
Identify juicy endpoints and examine their traffic. Is there a day/night cycle? What kind of response codes are you seeing? What behaviour would you expect to see on that endpoint? Is there a reason for someone to be targeting this endpoint?
- Number of requests over time:
Are you seeing a very high number of requests in a short time span? Is that even humanly possible? Is there a benign reason it might be happening (poorly coded API calls)? Is the traffic flat over time? Is one IP/ASO (Autonomous System Organization) regularly making 2 requests per minute, 24 hours a day? Does the timing seem automated? (A sketch for spotting metronomic timing follows the list.)
- User trace:
What flow does the traffic take through the site? Does it follow an expected flow, or are you seeing one user hitting /login for 2 hours straight, collecting 400 codes, followed by a 200 and a hop to /accept, then back again? (A sketch for flagging this trace follows the list.)
- Details on the traffic:
What ASOs does the traffic come from? What are the IPs? Are the IPs coming out of data centers? Does one session cookie get shared across many IPs for regular requests (see the cookie-sharing sketch after this list)? What are the user-agents? Do those user-agents deviate from normal traffic? Is most of your suspicious traffic one user-agent? Is it scattered across many different datacenters to avoid detection?
- What information is available about the client:
This is similar to details on the traffic, but the more information you can get about a client, the better. These attacks are frequently distributed across multiple ASOs, residential proxies, and regions, yet an attacker will generally spread the same client configuration across all of their compute resources. This means that the more information you can gather, the better you will be at identifying the attacker.
- OSINT:
Are people talking about automation against your services on the web? Searching for your company name alongside terms like scraping, automation, or botting can lead to the discovery of actors targeting your company, and of their specific configurations.
- Is there money to be made through automation:
This can be harder to quantify. If you’re a company like Ticketmaster, your adversaries will be fairly advanced due to the money to be made. Generally, if there is money to be made through automation attacks against your company, you are likely already aware of it.
The more money in play, the more advanced your adversaries will be. With large amounts of money to be made, you will find your adversaries’ methods often resemble your own engineering team’s. You are likely to see SaaS solutions built to defeat your detections; there will be numerous providers and a whole chain of related services. Some will be outright illegal/criminal enterprises, and some will operate in a grey area.
Of course there are also hidden angles: maybe your company has a really lax endpoint that allows attackers to validate credit cards with little to no effort. That’s a juicy target, but it’s less likely to have an entire ecosystem built around it.
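The day/night and per-endpoint checks above are easy to automate. Here is a minimal sketch in Python, assuming access logs already parsed into (timestamp, endpoint) tuples; the function names, thresholds, and data shapes are illustrative rather than from any particular logging stack. It buckets requests by hour of day for each endpoint and flags endpoints whose volume barely moves around the clock, using the coefficient of variation as the flatness measure.

```python
# Sketch: flag endpoints whose hourly request volume is suspiciously flat.
# Assumes records have been parsed into (datetime, endpoint) tuples;
# the thresholds below are starting points to tune, not recommendations.
from collections import Counter
from statistics import mean, pstdev

def flat_traffic_endpoints(records, min_requests=1000, max_cv=0.3):
    """Return endpoints whose traffic lacks a day/night cycle.

    Human-driven traffic rises and falls with the day; a low
    coefficient of variation across the 24 hourly buckets means
    volume barely moves between day and night, which is a common
    signature of automation.
    """
    buckets = {}  # endpoint -> Counter mapping hour-of-day to hits
    for ts, endpoint in records:
        buckets.setdefault(endpoint, Counter())[ts.hour] += 1

    flat = []
    for endpoint, hours in buckets.items():
        counts = [hours.get(h, 0) for h in range(24)]
        if sum(counts) < min_requests:
            continue  # too little traffic to judge
        cv = pstdev(counts) / mean(counts)
        if cv < max_cv:
            flat.append((endpoint, round(cv, 3)))
    return flat
```

Calibrate max_cv against endpoints you know are human-driven before trusting the output.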
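For the requests-over-time signal, one cheap heuristic is the variance of inter-request gaps per client: a bot firing on a timer (the 2-requests-per-minute, 24-hours-a-day pattern) produces gaps with almost no variance, where humans are bursty and irregular. A sketch under the same assumptions about log parsing:

```python
# Sketch: flag clients whose inter-request timing looks machine-generated.
# Assumes timestamps are unix seconds, pre-grouped per client; names and
# thresholds are illustrative.
from statistics import mean, pstdev

def metronomic_clients(times_by_client, min_requests=100, max_cv=0.1):
    """times_by_client: dict of client_id -> sorted list of timestamps.

    Flags clients whose gaps between requests are nearly constant.
    """
    flagged = []
    for client, times in times_by_client.items():
        if len(times) < min_requests:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        if avg == 0:
            flagged.append((client, 0.0))  # sub-second hammering
            continue
        cv = pstdev(gaps) / avg
        if cv < max_cv:
            flagged.append((client, round(cv, 3)))
    return flagged
```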
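The /login trace described under “User trace” can also be flagged mechanically: count 4xx responses on /login per session and look for a success arriving after a long failure run. Again a sketch over an assumed data shape, with a threshold that is a guess to tune:

```python
# Sketch: spot the login-hammering trace described above.
# Assumes requests are pre-grouped by session, in order; a real human
# mistypes a password a handful of times, not fifty.
def stuffing_suspects(events_by_session, fail_threshold=50):
    """events_by_session: dict of session_id -> list of (path, status)."""
    suspects = []
    for session, events in events_by_session.items():
        failures = sum(1 for path, status in events
                       if path == "/login" and 400 <= status < 500)
        successes = sum(1 for path, status in events
                        if path == "/login" and status == 200)
        if failures >= fail_threshold and successes >= 1:
            suspects.append((session, failures, successes))
    return suspects
```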
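Finally, for details on the traffic: one session cookie fanned out across many source IPs is rarely a human. The sketch below groups requests by cookie and counts distinct IPs, optionally mapping each IP to its network; the asn_lookup parameter is a stand-in for whatever GeoIP/ASN data source you have, not a real API.

```python
# Sketch: find session cookies reused across many source IPs or networks.
# One person roams across a few IPs (phone, home, office); one cookie
# spread over dozens of datacenter IPs usually means a bot distributing
# a single authenticated session over proxies.
from collections import defaultdict

def shared_cookies(requests, max_ips=3, asn_lookup=lambda ip: "unknown"):
    """requests: iterable of (session_cookie, source_ip) pairs."""
    ips_by_cookie = defaultdict(set)
    for cookie, ip in requests:
        ips_by_cookie[cookie].add(ip)

    flagged = []
    for cookie, ips in ips_by_cookie.items():
        if len(ips) > max_ips:
            networks = {asn_lookup(ip) for ip in ips}
            flagged.append((cookie, len(ips), sorted(networks)))
    return flagged
```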
Mitigation
Mitigation is going to be incredibly dependent upon a large number of factors. The common thought process is that the best mitigations increase difficulty, which increases the attacker’s time and spend. The more you can make your attackers spend, and the harder it is to automate against your resources, the more likely they are to move to a softer target.
This is true to a point.
If you are in an industry that sees massive resale value come out of automation, your attackers have likely already created an ecosystem around selling automation tooling that works against your enterprise. Once these actors are dug in, it’s incredibly difficult to dissuade them. They will be accruing large amounts of money from SaaS that defeats your mitigations, and likely living on it. For example, https://www.capsolver.com/ is a persistent group selling workarounds to mitigations.
If you are in an industry that controls valuable infrastructure, you will likely not be able to easily dissuade attackers either. If what you have is a juicy enough target, they will continue to push. In some cases they will be sponsored by nation states, in others by organized crime.
Both of the above will require a very forward and iterative security posture. The more changes you can make at once, the better; the more complexity you can add, the better. Expect mitigations to be overcome in novel ways.
AI mitigations might be defeated by an attacker generating so much bad traffic on an endpoint that the model learns this is the correct way to use the endpoint and stops mitigating it. Aggregate detections and mitigations take time to collect the data points they need; an attacker may discover this and resort to large burst attacks that outrun your defenses. Attackers may also pivot to other endpoints, or discover other weaknesses in your org, allowing them to continue their attack without your mitigations in place.
tags: botting - automation - threat - retro