Thank you so much.
Good afternoon, everyone. I think Mr. DeBarber mentioned most of the content that I wanted to share with you, but maybe I can speak from another perspective, as a researcher. I'm also going to share some of my latest findings, which I have already published.
As a short bio, I am Arash Habibi Lashkari, assistant professor in the faculty of computer science at UNB, research coordinator at the Canadian Institute for Cybersecurity and also a senior member of the IEEE.
In the past two and a half decades, I have been involved in different projects related to designing, developing and implementing the next generation of detection and prevention technologies in academia and industry.
On the academic side, I can share with you that I have over 20 years of teaching experience spanning several international universities. On the research side, I have published 10 books and around 90 research articles on a variety of cybersecurity-related topics. I have also received 15 awards in international computer security competitions, including three gold medals. In 2017, I was recognized as one of the top 100 Canadian researchers who will shape the future of Canada. My main research areas are Internet traffic analysis, malware detection and threat hunting.
As has been requested here, today I am talking about the dark and deep web and also the dark and deep net, but I will try to make it simple so that everybody can easily visualize it.
We have three layers. The first one, the common layer, is what we call the “surface web”. This is everything that is available and open, everything that can be found through search engines such as Google, Bing, Baidu and others. We call this the “indexed web”, meaning the websites that have been indexed by search engines.
The second one is the deep web, the portion of the Internet that is hidden from search engines; we call this the “unindexed web”. It mainly includes personal information, such as payment information, medical records and private corporate data, as well as content we connect to, for example, through a VPN, a virtual private network.
The third one is the dark web. This portion is deliberately hidden from search engines and consists of the www content that exists on darknets. These websites can be accessed only with special software and browsers that allow both the users and the website operators to remain anonymous and untraceable. There are several projects going on to support the darknet, such as Tor, The Onion Router; I2P, the Invisible Internet Project; and Riffle, a collaborative project between MIT and EPFL created in response to problems with the Tor network.
What is the origin of the early darknet? In 1971 or 1972, two Stanford students, using an ARPANET account at the AI laboratory, engaged in a commercial transaction with their counterparts at MIT. This means that before Amazon and before eBay, the seminal act of e-commerce was a drug deal: the students used this network to quietly arrange the sale of an undetermined amount of marijuana through the precursor to the Internet we know today.
What is the new version of the darknet, the modern darknet? In the 1990s the lack of security on the Internet, and its ability to be used for tracking and surveillance, became clear, and in 1995 three researchers at NRL, the U.S. Naval Research Laboratory, asked themselves if there was any way to create Internet connections that didn't reveal who was talking to whom, even to someone monitoring the network. The answer was onion routing.
The goal of onion routing was to have a way to use the Internet with as much privacy as possible. The idea was to route traffic through multiple servers and encrypt it at each step of the way, so that no single point in the network knows both the source and the destination of the traffic.
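To make the layering idea concrete, here is a minimal sketch in Python of how a message can be wrapped in one encryption layer per relay and then peeled off hop by hop. This is only an illustration of the concept; the three-relay circuit and the key handling here are assumptions for the example, and the real Tor protocol negotiates keys per circuit and adds routing information.

```python
# Minimal sketch of onion routing's layered encryption (illustrative only;
# not the actual Tor protocol, which negotiates keys and adds routing headers).
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical three-relay circuit: entry, middle, exit.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Encrypt the message once per relay, innermost layer first (exit relay)."""
    onion = message
    for key in reversed(keys):  # exit key applied first, entry key last
        onion = Fernet(key).encrypt(onion)
    return onion

def unwrap_one_layer(onion: bytes, key: bytes) -> bytes:
    """Each relay peels exactly one layer and forwards the remainder."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"hello, hidden service", relay_keys)
for key in relay_keys:              # the packet travels entry -> middle -> exit
    onion = unwrap_one_layer(onion, key)
    # Each relay sees only ciphertext for the next hop, never the whole path.
print(onion)                        # the plaintext emerges only at the exit
```

The property to notice is that each relay can remove only its own layer, so no single relay ever sees both the original sender and the final plaintext.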
In 2000, a student from MIT, Roger, began working with one of these researchers at the NRL and created a new project named Tor, The Onion Router. After that, another student, a classmate of his, joined the team. They received funding from the EFF, and in 2006 they officially established the non-profit organization.
My latest research results, all published in 2016, 2017 and 2020, show that it is possible to detect users who are connecting to the dark or deep web within a short window of captured traffic, around 10 to 15 seconds. We can also detect the type of software or application they are using, but from their machine, not from the Internet. From the Internet side, everything is completely anonymized, but from the actual user's machine it is possible to detect their activity.
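To give a sense of what such time-based traffic analysis can look like, here is a hedged sketch in Python: it computes simple timing features over a short capture window of a flow and trains an off-the-shelf classifier. The feature set, the synthetic data and the labels are illustrative assumptions for the example, not the published feature set, datasets or results.

```python
# Hedged sketch of a time-based flow-feature approach: capture roughly
# 10-15 seconds of a flow, compute timing statistics, and classify.
# Features and data below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(packet_times: np.ndarray) -> list:
    """Timing features for one flow: duration and inter-arrival statistics."""
    iat = np.diff(packet_times)                  # inter-arrival times
    return [packet_times[-1] - packet_times[0],  # flow duration
            iat.mean(), iat.std(), iat.min(), iat.max()]

rng = np.random.default_rng(0)
# Synthetic stand-ins: 'darknet-like' flows with slower, burstier timing
# than 'regular' flows (purely hypothetical distributions).
darknet = [flow_features(np.cumsum(rng.exponential(0.30, 50))) for _ in range(100)]
regular = [flow_features(np.cumsum(rng.exponential(0.05, 50))) for _ in range(100)]

X = np.array(darknet + regular)
y = np.array([1] * 100 + [0] * 100)              # 1 = darknet-like, 0 = regular

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([flow_features(np.cumsum(rng.exponential(0.30, 50)))]))
```

The design point is that timing statistics survive encryption: even though the payloads are anonymized, packet arrival patterns observed on the user's machine can still be characteristic of the application in use.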
I am ready to answer any questions the committee may have.
Thank you.