The Presentations

Tracking Period Apps

Wendy Edwards and Jacqueline Xavier

We propose to analyze the behavior of women's health apps using virtualization with static and dynamic code analysis. At this point, there have been multiple news articles pointing out potential privacy concerns with period tracking apps, but little systematic low-level analysis to find out exactly what they do. If we succeed, we believe that lessons learned from our work could help privacy researchers investigating other domains.

Do You Trust Your Cryptocurrency Exchange? Market Abuse as a Security Problem

Presented by Hayden Melton, from his joint research with Vasilios Mavroudis

Order matching systems form the backbone of the modern equity, cryptocurrency, and commodity exchanges used by millions of investors daily. Their operation is therefore usually strictly controlled through numerous regulatory directives to ensure that markets remain fair and transparent (cryptocurrencies being the exception). Despite these efforts, predatory traders frequently come up with sophisticated new mechanical arbitrage techniques that exploit the complexity of these systems to gain an advantage over other market participants. Notably, predatory traders do not rely on any conventional trading strategy, nor do they hack the exchange's computers. Instead, they find tiny bugs, obscure corner cases, and inefficiencies in order matching systems, and spin them so that they gain an advantage over everyone else.

Our goal is to study the security of order matching systems and examine the technical underpinnings of common fairness and transparency problems in cryptocurrency exchanges. For the first time, we treat mechanical market abuse as a security problem and investigate how manipulation techniques exploit the underlying market infrastructure. The existing literature focuses on the financial and legal aspects of these problems (in equity markets), but we take a completely new approach and look into the security assumptions, properties, and adversarial models of electronic exchanges. Our preliminary work (see attached document) discusses various such attacks that are used in modern, regulated equity markets. Their technical component is impressive: the adversaries are well-resourced and even use their own networks of microwave-tower links to save milliseconds in transmission latency. If you are curious, see also "high-frequency trading", but our scope here is broader.
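To make the attack surface concrete, here is a minimal sketch (ours, not the authors') of the price-time priority matching logic at the heart of such systems. Real engines add order types, auctions, and self-trade prevention, and it is exactly those corner cases and inefficiencies that predatory traders mine.

```python
from collections import deque

class MatchingEngine:
    """Toy price-time priority order book: resting orders at each price level
    are matched FIFO. Illustrative only; real engines are far more complex."""

    def __init__(self):
        self.bids = {}  # price -> deque of (order_id, qty); best bid = max price
        self.asks = {}  # price -> deque of (order_id, qty); best ask = min price

    def submit(self, side, price, qty, order_id):
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        fills = []
        # Cross against the opposite side while prices overlap.
        while qty > 0 and opposite:
            best = min(opposite) if side == "buy" else max(opposite)
            if (side == "buy" and price < best) or (side == "sell" and price > best):
                break
            queue = opposite[best]
            resting_id, resting_qty = queue[0]
            traded = min(qty, resting_qty)
            fills.append((resting_id, best, traded))  # trades at the resting price
            qty -= traded
            if traded == resting_qty:
                queue.popleft()
                if not queue:
                    del opposite[best]
            else:
                queue[0] = (resting_id, resting_qty - traded)
        if qty > 0:  # rest the remainder, joining the back of its price level
            book.setdefault(price, deque()).append((order_id, qty))
        return fills

eng = MatchingEngine()
eng.submit("sell", 100, 5, "A")
print(eng.submit("buy", 101, 3, "B"))  # [('A', 100, 3)] -- B lifts A at A's price
```

Even this toy version embeds policy decisions (who gets the price improvement, how partial fills queue) whose real-world variants are where mechanical arbitrage lives.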

The fun part: if these attacks are taking place in strictly regulated equity markets, what about unregulated cryptocurrency exchanges? How toxic is trading in those venues? Our main goal is to survey several popular cryptocurrency exchanges and report our findings, to help exchanges mitigate these problems and investors protect themselves.

We believe this is a completely overlooked type of hacking (we actually don't think anyone has ever referred to it as 'hacking' until now), and we are very excited to work on it.

UEFI Research

Dion Blazakis

You might have heard about all the computers in your computer. It’s true. Believe the hype. Most of them are programmable, too. In this talk, we’ll discuss the computers in your computer and a few approaches to enumerating them on a given system. After enumeration, we’ll explore the attack surface available from the host kernel along with a few examples of what you might find once you pull back the covers. Finally, we’ll discuss why code execution and persistence on these other components is becoming increasingly attractive for an attacker. You’ll never buy a used computer again (or, at least, you’ll feel bad about it when you do).

Symbolic fuzzing

Nicolas Braud-Santoni

Property-based testing, of which fuzzing is a popular example in security testing, is a powerful and effective methodology for semi-automatically finding bugs in software. In particular, fuzzing can find memory violations in C/C++ programs, which are often exploitable, potentially leading to remote code execution or arbitrary memory reads.

A core limitation of current property-based testing approaches is that many code paths are very unlikely to be reached by random inputs; for instance, many image-processing libraries validate checksums in image headers before doing anything else, resulting in random testcases being discarded.

The conventional solution is either to modify the system-under-test to disable input validation, or to write an input generator by hand. Both are problematic, for two reasons:

First, it is a usability and (human) performance issue: the engineer needs to notice they are getting poor coverage, investigate why, design an application-specific workaround, and implement it. Second, modifying the system being tested, or using a handcrafted generator, risks missing bugs present in the system: for example, bugs in the input validation code will not be found if that code is patched out, or if the fuzzer is constrained to only generate valid inputs.

I propose an alternative, which I have dubbed symbolic fuzzing: when a code path is under-covered, derive the input conditions leading up to it (using symbolic execution techniques), and then sample random inputs fulfilling those conditions (using automated constraint-solving technology such as SMT). Assuming the SMT solver does find satisfying testcases, this guarantees we will hit all possible code paths without having to modify the system-under-test or write application-specific code.
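As a toy illustration of the idea (ours, not the author's implementation), the sketch below hand-solves the path condition for a checksum gate and then samples only inputs that satisfy it; a real symbolic fuzzer would recover the condition automatically via symbolic execution and hand it to an SMT solver.

```python
import random

def parse(data: bytes) -> str:
    """Toy system-under-test: the first byte is a checksum over the body."""
    if len(data) < 2 or data[0] != sum(data[1:]) % 256:
        return "rejected"  # random inputs almost always die at this gate
    if data[1] == 0x42:
        return "bug"       # deep path a purely random fuzzer rarely reaches
    return "ok"

random.seed(0)

# Purely random fuzzing: the checksum gate discards ~255/256 of testcases.
hits = sum(parse(bytes(random.randrange(256) for _ in range(4))) != "rejected"
           for _ in range(1000))

# "Symbolic" phase, hand-solved for this toy target: the path condition for
# passing the gate is data[0] == sum(data[1:]) % 256. We sample random bodies
# and compute a header satisfying the constraint.
def sample_valid(n: int) -> bytes:
    body = bytes(random.randrange(256) for _ in range(n - 1))
    return bytes([sum(body) % 256]) + body

constrained_hits = sum(parse(sample_valid(4)) != "rejected" for _ in range(1000))
print(hits, constrained_hits)  # constrained sampling passes the gate every time
```

The gap between the two counts is exactly the coverage problem described above; constrained sampling explores past the gate (including the validation code itself, which stays in place) without any modification to the target.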

Rain Drop, Drop Top, Cooking up Memes for a /Pol/ Bot

Sophia D’Antoine and Ian Roos

Programmatically defined viral content needs to end.

Purchasers of bot video views only need to buy enough traffic to reach a critical mass at which the content is classified as trending or viral, at which point it blows up with organic traffic. This benefits bot campaigns by limiting the attack surface (in time and volume) by which they could be identified. The value these promotion platforms provide is negligible compared to the degree to which they enable invalid traffic.

Even if someone becomes popular in trending, they will be immediately devalued by viewers and advertisers if there is the perception that they didn't arrive there organically. The pervasiveness of these campaigns is already damaging similar ecosystems (Twitch, for example, has a huge bot viewer inflation problem).

Content aggregators (Reddit, Hacker News, etc.) already exist. If we force adversaries to use these platforms to inflate their campaigns, it will double their attack surface and make it that much easier to detect these operations; currently they operate entirely within a single platform, which can bring only a single detection methodology to bear on profiling their behavior. If we can force the adversary to face two enemies, it becomes that much harder for them to operate.

Traditionally, humans decided which content should (and did) become "viral." When you replace this with a robot, you create a surface that is asking to be gamed, especially when you financially incentivize it. It is by definition inorganic.

If it is good content it probably doesn’t need a robot to automatically promote it.

If content blows up organically once the bot promotes it, would it have become popular on its own? No: for influence operations to work, it is only necessary to expose users to the promoted content. They don't even need to deliberately engage; it just needs to get shoved in their face.

We expose trends in online automated bot and manual influence operations for ad fraud and political gain.

Mining Disputed Territories: Studying Attacker Signatures for Improved Situation Awareness

Juan Andres Guerrero-Saade

Proactive defense is a matter of situational awareness. It involves hunting on what are sometimes anemic threads and deriving actionable context. Whenever defenders share information with the community, we jump on the opportunity to build off of each other’s work. Why not do the same with our attackers?

In recent years, we’ve become increasingly aware that defenders (conventionally defined) are not the only ones interested in cyber situational awareness. High-end malware families employ defensive measures to avoid operating on victim boxes already infected by ‘friends or foes’. The attackers are trying to avoid getting burned alongside another attacker’s noisier toolkit, conflicting with a friendly operation, or falling prey to more complex fourth-party collection dynamics that leave their intelligence collection operations vulnerable to piggybacking.

For us, studying attacker dynamics in-the-wild represents an opportunity to piggyback on the situational awareness of organizations positioned to view APT conflicts from an entirely different vantage point: the trenches of shared victimology. As defenders, we have an obvious remit to turn all possible insights into actionable defense, so that the internet ecosystem as a whole is better defended.

This talk will explore insights into how attackers monitor one another, revealing the blindspots of previously unknown operations and actor dynamics, and expanding our defensive capabilities against our common foes.

Anchor Baby

Ang Cui and Jatin Kataria

Novel Idea: We present previews of two separate research projects. First, a novel method of reliably manipulating FPGA functionality through bitstream analysis and modification, circumventing the need to perform RTL reconstruction. Our manipulation methods create numerous possibilities for the exploitation of embedded systems that use FPGAs. Second, we present the reverse engineering process of a popular IoT babytech device. Please note that the public disclosure date for this vulnerability is June 15th. Accordingly, we have scheduled this talk for the 15th.

Limitations: RTL reconstruction from FPGA bitstream is a complex problem. RTL reconstruction without intimate knowledge of the specific FPGA hardware design is currently infeasible.

Impact: These bitstream manipulation techniques have numerous practical applications: persistent FPGA implants, physical destruction of embedded systems, and attacks against FPGA-based systems such as software-defined radios, advanced driver-assist modules in cars, and weapon/guidance systems. The second project makes the point that we should not put our babies on the internet.

Worm Charming: Harvesting Malware Lures for Fun and Profit

William MacArthur

(This talk should be by: Pedram Amini, but life happens.)

It's no secret that client-side attacks are a common source of compromise for many organizations. Web browser and e-mail borne malware campaigns target users by way of phishing, social engineering, and exploitation. Office suites from vendors such as Adobe and Microsoft are ubiquitous and provide a rich and ever-changing attack surface. Poor user awareness and clever social engineering tactics frequently result in users consenting to the execution of malicious embedded logic such as macros, JavaScript, ActionScript, and Java applets. In this talk, we'll explore a mechanism for harvesting a variety of these malware lures for the purposes of dissection and detection.

Worm charming (also known as grunting or fiddling) is an increasingly rare real-world skill for attracting earthworms from the ground. A competitive sport in East Texas, most worm charming methods involve some vibration of the soil, which encourages the worms to surface. In our context, we'll apply a series of YARA rules to charm interesting samples to the surface from the nearly 1M files uploaded to VirusTotal daily. Once aggregated, we'll explore mechanisms for clustering and identifying "interesting" samples. Specifically, we're on the hunt for malware lures that can provide a heads-up to defenders on upcoming campaigns, as adversaries frequently test their lures against AV consensus.
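To illustrate the charming step (the indicator strings below are hypothetical examples, not the speakers' actual YARA rules), a YARA-style matcher over macro-lure indicators can be sketched in a few lines:

```python
import re

# Hypothetical indicator set: strings that often co-occur in macro-based lures
# (auto-exec entry points plus dropper primitives). Real rules would be richer
# and combined with clustering over the matched corpus.
LURE_RULES = {
    "macro_autoexec": [rb"AutoOpen", rb"Auto_Open", rb"Document_Open"],
    "macro_dropper":  [rb"Shell", rb"WScript\.Shell", rb"URLDownloadToFile"],
}

def charm(sample: bytes) -> list:
    """YARA-style matching sketch: a rule fires if any of its strings appears."""
    return [name for name, patterns in LURE_RULES.items()
            if any(re.search(p, sample) for p in patterns)]

doc = b'Sub AutoOpen()\n  CreateObject("WScript.Shell").Run "cmd"\nEnd Sub'
print(charm(doc))  # ['macro_autoexec', 'macro_dropper']
```

Run across a large daily feed, the rule names that fire become cheap features for the clustering step described above.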

VR/AR Research Demonstration

Tom Cross and Constantine Alexis

In what is totally not a product demo, Tom and Constantine will show us a product that they're working on. They've been pretty reliable with providing entertaining presentations in the past and want some feedback, so get ready to yell it at them.

Hack...no...Protect, all the Things..? A Researcher's Candid Memoir of 4 Years Running an Indie Infosec Product Company, Trying to Make the World of "Things" Safer

Stephen Ridley

For the first time, Stephen shares these experiences...imagine the motivational talks of all your favorite snake-oil, self-help, (failed) business visionaries...your most egotistical self-aggrandizing TEDx speakers...replete with 0days.

Fuck RSA

Ben Perez

RSA is terrible. You should stop using it. Seriously. Stop using RSA. Ben will elaborate. We can't wait to hear what the drinking word is for this one.

Sliver

Joe DeMesey and Ronan Kervella

Sliver is a cross-platform general purpose implant framework written in Golang designed to be an open source alternative to Cobalt Strike. Sliver supports asymmetrically encrypted C2 over DNS, HTTP, HTTPS, and MutualTLS using per-binary X.509 certificates signed by a per-instance certificate authority. Sliver supports multi-player mode with any number of operators.

Technical Description: We're trying to build a viable open source alternative to Cobalt Strike, with cryptographically secure C2.

Sliver uses an embedded version of the Go compiler to dynamically generate implant binaries with per-binary X.509 certificates, per-binary obfuscation, and per-binary DNS canaries (unique domain strings that are deliberately left unobfuscated; the server triggers an alert if one is ever resolved, indicating the implant has been discovered). Other features include TCP tunnels, procedurally generated HTTP C2 messages, automated Let's Encrypt integration, and more.
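The DNS canary idea can be sketched as follows (hypothetical names and structure for illustration, not Sliver's actual Go implementation):

```python
import secrets

PARENT_ZONE = "canary.example.com"  # hypothetical canary zone the C2 server controls

def new_canary() -> str:
    """Per-binary canary: a unique, unobfuscated subdomain baked into each
    implant. Nothing legitimate ever resolves it, so any lookup means someone
    (e.g. an AV sandbox) pulled strings out of the binary and resolved them."""
    return f"{secrets.token_hex(8)}.{PARENT_ZONE}"

def burned_implants(canaries: dict, observed_queries: list) -> list:
    """Server side: map resolved canary domains back to the burned implant."""
    domain_to_implant = {domain: implant for implant, domain in canaries.items()}
    return sorted({domain_to_implant[q] for q in observed_queries
                   if q in domain_to_implant})

canaries = {"implant-a": new_canary(), "implant-b": new_canary()}
# A sandbox detonates implant-a and resolves every domain string it finds:
print(burned_implants(canaries, [canaries["implant-a"], "example.org"]))
# ['implant-a']
```

Because the canary is unique per binary, a single DNS query pinpoints exactly which implant was analyzed, without revealing anything about the real C2 traffic.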

Sliver also supports several Windows-specific post-exploitation features, such as user token manipulation, in-memory .NET assembly execution, process migration, and privilege escalation.

The design is based on a prototype network pivoting tool we built called 'Rosie the Pivoter': https://github.com/moloch--/rosie