Recorded Future’s research unit Insikt Group today revealed the findings of its latest research, which shows rising ad fraud losses driven by advances in, and the growing accessibility of, easy-to-use bot software and other automation solutions.
The report – Improving Automation and Accessibility Drive $100 Billion in Projected Ad Fraud Losses – shows that the growing accessibility of easy-to-use bot software and other automation solutions reduces the technical requirements needed to conduct ad fraud, making it a flourishing market.
Ad fraud, which occurs when fraudsters artificially inflate the metrics used to measure ad performance for personal or financial gain, directly impacts advertisers and publishers, whose advertising budgets (“ad spend”) and ad revenue, respectively, are parasitised by fraudsters.
According to Statista, losses from ad fraud are projected to reach $100 billion by the end of 2023.
Ad fraud also damages the credibility of the ecosystem, raising the risk of brand impairment for ad tech companies and other intermediaries that enable programmatic advertising.
The report shares insights and examples of how fraudsters are using different technologies and tactics to commit effective ad fraud at scale.
These include the latest advancements in:
- Botnets – networks of devices infected with malware. After users unknowingly download the malware, their devices join a larger network of infected devices controlled by a threat actor.
- Bot farms – collections of bots that may be physically centralised: for example, on servers rented from a data centre or on homemade “phone farms” operated by a threat actor.
Continual improvements in the effectiveness and accessibility of automation offerings are widening the pool of actors who can conduct ad fraud, likely allowing low-level fraudsters with little experience or technical expertise to commit ad fraud en masse:
- Dark web – the report identified a number of dark web and cybercrime-focused clearnet sources where fraudsters discuss which ad fraud tools and TTPs (tactics, techniques, and procedures) are effective, advertise their own offerings to their peers, and request or provide guidance on ad fraud tools and TTPs.
- Open source – open-source code offers fraudsters access to user-friendly bot software with “out-of-the-box” functionality. Code repositories like GitHub provide ready-made scripts and code that fraudsters can use to swiftly operationalise bot farms and conduct ad fraud at scale.
- Underground forums – fraudsters on underground forums provide tailored ad fraud guidance in response to requests from less experienced peers.
- YouTube – certain YouTube channels make information on ad fraud readily available for fraudsters, increasing the accessibility of ad fraud as a whole.
Ad fraud’s appeal and accessibility likely facilitate a convergence of threats, including money laundering and cybercrime. Fraudsters are also exploiting automated online advertising to conduct malvertising attacks funded by card fraud.
Mitigation strategies include implementing technical solutions capable of detecting and filtering out invalid traffic (IVT), which is indicative of bot activity (see the sketch after this list), but also:
- Seek and promote “information symmetry” to better understand the effectiveness of ad spend and/or authenticity of traffic.
- Identify and address inefficiencies in ad spend by prioritising advertising effectiveness over low cost.
- For publishers, the use of ads.txt and sellers.json files can increase transparency for advertisers and help combat ad fraud (see the ads.txt example after this list).
- Employ threat intelligence to better understand — and by extension, mitigate — the threat that ad fraud poses to your organisation.
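To make the IVT point above more concrete, the following is a minimal sketch of the kind of heuristic filtering such technical solutions build on. The click-log fields, user-agent markers and threshold below are invented for illustration and are not drawn from the Insikt Group report; commercial ad-verification products rely on far richer signals.

```python
"""Illustrative only: a toy heuristic filter for flagging likely invalid
traffic (IVT) in an ad click log. Field names, thresholds and the
user-agent markers are assumptions for this example."""

from collections import Counter

# Assumed: each click record is a dict with an IP, user agent, and timestamp.
SAMPLE_CLICKS = [
    {"ip": "203.0.113.7", "user_agent": "Mozilla/5.0 (Windows NT 10.0)", "ts": 1},
    {"ip": "198.51.100.2", "user_agent": "python-requests/2.31", "ts": 1},
    {"ip": "198.51.100.2", "user_agent": "python-requests/2.31", "ts": 2},
    {"ip": "198.51.100.2", "user_agent": "python-requests/2.31", "ts": 2},
]

# Assumed heuristics: automation-style user agents and an abnormal
# click volume from a single IP are treated as IVT signals.
BOT_UA_MARKERS = ("bot", "curl", "python-requests", "headless")
MAX_CLICKS_PER_IP = 2


def flag_ivt(clicks):
    """Return the subset of clicks that look like invalid traffic."""
    clicks_per_ip = Counter(c["ip"] for c in clicks)
    flagged = []
    for click in clicks:
        ua = click["user_agent"].lower()
        if any(marker in ua for marker in BOT_UA_MARKERS):
            flagged.append(click)  # automation-style user agent
        elif clicks_per_ip[click["ip"]] > MAX_CLICKS_PER_IP:
            flagged.append(click)  # implausible click volume from one IP
    return flagged


if __name__ == "__main__":
    for click in flag_ivt(SAMPLE_CLICKS):
        print("possible IVT:", click["ip"], click["user_agent"])
```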
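Similarly, for the ads.txt point above, the snippet below is a hypothetical illustration of how an advertiser-side check against a publisher's ads.txt file could work. The publisher domain, ad system and account ID are placeholders; production tooling implements the full IAB ads.txt specification rather than this simplified parse.

```python
"""Illustrative only: a minimal check of whether a seller account appears in
a publisher's ads.txt file. The domain and IDs below are placeholders."""

import urllib.request


def fetch_ads_txt(publisher_domain):
    """Download the publisher's ads.txt, published at a well-known root path."""
    url = f"https://{publisher_domain}/ads.txt"
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")


def is_authorised(ads_txt, ad_system, seller_account_id):
    """Return True if the (ad system, account) pair is listed in ads.txt.

    Each data record in ads.txt is a comma-separated line such as:
        exampleadexchange.com, 12345, DIRECT, abc123tagid
    """
    for line in ads_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        fields = [f.strip() for f in line.split(",")]
        if (len(fields) >= 3
                and fields[0].lower() == ad_system.lower()
                and fields[1] == seller_account_id):
            return True
    return False


if __name__ == "__main__":
    # Placeholder values; substitute a real publisher domain and seller ID.
    records = fetch_ads_txt("example-publisher.com")
    print(is_authorised(records, "exampleadexchange.com", "12345"))
```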
The impact of ad fraud is likely to increase as a function of the size of the online advertising market as a whole, and artificial intelligence (AI) will likely play a larger role in both conducting and preventing ad fraud.