Historic datasets (from 2014 onwards) for the .nl TLD. Datasets are available in JSON format.
Datasets cover information about:
- DNS
  - Domain Names
  - Query Type
  - Response Codes
  - IPv6 Support
- Resolvers
  - Location
  - Number of IP addresses
  - Validating Resolvers
  - Popular Networks
  - Port Randomness
- DNSSEC
  - Validating Queries
  - DANE
  - Used Algorithms
- Mail
  - Mail Resource Records (RRs)
  - SPF Information
The AMP-Research project collects information about amplification vectors in protocols, including how to reproduce them. For each vector, the port and protocol are listed, as well as the amplification factor. A scanning script or payload for scanning with zmap is included too.
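The amplification factor listed for each vector is simply the ratio of response bytes to request bytes. A minimal sketch (the sizes below are illustrative, not measured values):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Return the bandwidth amplification factor: response size / request size."""
    if request_bytes <= 0:
        raise ValueError("request size must be positive")
    return response_bytes / request_bytes

# Hypothetical vector: a 64-byte query triggering a 3072-byte response
print(amplification_factor(64, 3072))  # 48.0
```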
APNIC REx shows general information about IPv4 and IPv6 usage and delegations. It features an overview of all AS connections. It replaces the earlier vizAS tool.
`monocle` is a command-line tool that is part of BGPKIT. It has various modes for tasks in and around BGP, including integration with Cloudflare Radar. `monocle whois` provides AS and organization information, `monocle time` converts between different time formats like RFC 3339 and Unix timestamps, and `monocle radar` interacts with Cloudflare Radar.
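The kind of conversion `monocle time` performs can be sketched in plain Python (this only illustrates the idea; it is not monocle's implementation):

```python
from datetime import datetime, timezone

def rfc3339_to_unix(ts: str) -> int:
    """Parse an RFC 3339 timestamp into a Unix timestamp."""
    # Replace a trailing 'Z' for compatibility with Python versions
    # where fromisoformat does not accept it directly.
    return int(datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp())

def unix_to_rfc3339(unix: int) -> str:
    """Format a Unix timestamp as an RFC 3339 string in UTC."""
    return datetime.fromtimestamp(unix, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(rfc3339_to_unix("2021-01-01T00:00:00Z"))  # 1609459200
print(unix_to_rfc3339(1609459200))              # 2021-01-01T00:00:00Z
```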
An open-source software framework for live and historical BGP data analysis, supporting scientific research, operational monitoring, and post-event analysis.
BGP streams are freely accessible and provided by RouteViews, RIPE, and BGPmon.
BGP Stream is a free resource for receiving alerts about hijacks, leaks, and outages in the Border Gateway Protocol.
BGP Stream provides real-time information about BGP events. It includes information about affected IPs and ASNs, and even a replay feature showing how the BGP announcements changed.
BGPlay shows a graph of the observed BGP routes. It allows replaying historical BGP announcements and displays route changes.
Documentation
GitHub
The BGP hijacking observatory lists potential BGP hijacks. It can observe different kinds of hijacks, e.g., shorter path or more specific prefix. It lists the hijacking time, potential victims and attackers, and the affected prefix.
More details about the different hijacking methods are in the AIMS-KISMET presentation.
Overview of datasets, monitors, and reports produced and organized by CAIDA. Also contains links to other datasets.
Censys performs regular scans for common protocols (e.g., DNS, HTTP(S), SSH) and provides a search for TLS certificates.
Access is free, but requires registration. The website no longer provides free bulk access. Bulk access requires a commercial or a research license. The free access is limited to 1000 API calls per day.
@InProceedings{censys15,
author = {Zakir Durumeric and David Adrian and Ariana Mirian and Michael Bailey and J. Alex Halderman},
title = {A Search Engine Backed by {I}nternet-Wide Scanning},
booktitle = {Proceedings of the 22nd {ACM} Conference on Computer and Communications Security},
month = oct,
year = 2015
}
Cloudflare Radar is Cloudflare's reporting website about internet trends and general traffic statistics. The website shows information about observed attacks and attack types and links to the DDoS report. General traffic statistics are reported, such as the browsers used, the fraction of human traffic, and the IP, HTTP, and TLS versions.
The website also provides more detailed information on domains and IP addresses. Domains have information about age, popularity, and visitors. IP addresses have ASN and geolocation information.
More information about Cloudflare Radar is available in the introduction blog post.
The Radar data is also available via API, for example the attack data: https://developers.cloudflare.com/api/operations/radar_get_AttacksLayer3Summary
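A minimal sketch of calling such an endpoint, assuming an API token with Radar read access; the base URL and path below are taken from Cloudflare's public API documentation and should be verified against the current reference:

```python
import urllib.request

# Assumed API base for Radar endpoints (check the Cloudflare API reference)
RADAR_BASE = "https://api.cloudflare.com/client/v4/radar"

def radar_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for a Radar endpoint."""
    return urllib.request.Request(
        f"{RADAR_BASE}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = radar_request("attacks/layer3/summary", token="YOUR_API_TOKEN")
print(req.full_url)  # https://api.cloudflare.com/client/v4/radar/attacks/layer3/summary
# urllib.request.urlopen(req) would then return the JSON summary (needs a valid token)
```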
Collection of "bad" packets in PCAPs that can be used for testing software.
The Common Crawl project builds an openly accessible database of crawled websites. The index can be searched.
Provides an outdated list of different Cyber Threat Intelligence feeds from other organizations.
DMAP is a scalable web scanning suite which supports DNS, HTTPS, TLS, and SMTP. It works based on domain names and crawls each domain for all supported protocols. The advantage over other tools is the unified SQL data model with 166 features and the easy scalability over many crawling machines.
The DNS Census 2013 consists of about 2.5 billion DNS records collected in 2012/2013. The data is gathered from some available zone files and passive or active DNS collecting. The DNS records are written into CSV files containing one DNS record per line.
DNS Coffee collects and archives stats from DNS Zone files in order to provide insights into the growth and changes in DNS over time.
The website includes information such as the size of different zones. It tracks over 1200 zone files.
It provides searching through the zone files based on domain names, name servers, or IP addresses. It can also visualize the relationship between a domain, the parent zones, and the name servers in what they call a "Trust Tree".
The DNS Core Census is an ICANN project to gather information about top-level domains (TLDs). This covers ccTLDs, gTLDs, effective TLDs (like `co.uk`), and entries in `arpa`. The census contains information about the zone (like metadata and contractual information), the name servers, the addresses of the name servers, and the route origins. The data is kept for a 35-day rolling window.
Further information about the project can be found in this presentation and OCTO-019 from ICANN's Chief Technology Officer.
Browser-based DNS resolver quality measurement tool. Uses the browser to generate many resolver queries and tests for features they should have, such as EDNS support, IPv6, QNAME Minimization, etc.
This test is also available as a CLI tool: https://github.com/DNS-OARC/cmdns-cli
Analyze DNSSEC deployment for a zone and show errors in the configuration.
Gives an overview of DNSSEC delegations, response sizes, and name servers.
GitHub: https://github.com/dnsviz/dnsviz
The website has an online test, which performs DNS lookups. These DNS lookups test if certain resource records are overwritten in the cache. The tool can then determine what DNS software is used, where the server is located, how many caches there are, etc.
Tests the name servers of zones for correct EDNS support.
Shows the trust dependencies in DNS. Given a domain name, it can show how zones delegate to each other and why. The delegation is done between IP addresses and zones.
The project was used to monitor the first root KSK rollover. Now it hosts the paper "Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover", describing the experiences of the first root KSK rollover.
Additionally, it includes a tester for DNSSEC algorithm support, which shows the algorithms supported by the currently used recursive resolver. It provides statistics about support for DNSSEC algorithms, has a web-based test for your own resolver, and provides live monitoring using the RIPE Atlas.
DNSSEC algorithms resolver test
This dataset covers approximately 3.5 billion DNS queries that were received at one of SURFnet's authoritative DNS servers from Google's Public DNS resolver. The queries were collected over 2.5 years. The dataset contains only those queries that contained an EDNS Client Subnet.
The dataset covers data from 2015-06 through 2018-01.
DOI Identifier
Tool to replay DNS queries captured in a PCAP file with accurate timing between queries. Allows modifying the replay, for example changing IP addresses or speeding up or slowing down the queries.
The website hosts the zone data for a couple of DNS zones, mainly some ccTLDs. This provides a good starting point for zone file analysis together with other sources.
DNS network capture utility. Similar in concept to tcpdump, but with specialized options for DNS.
Historical DNS database. Contains information recorded at recursive resolvers about domain names, first/last seen, and current bailiwick. It shows the lifetime of resource records and can be used as a large database.
Historical information about the reachability of root and some TLD name servers.
The Internet Society published maps showing the distribution of IPv6 support worldwide. The maps are also available with historic data, but are only updated sporadically. More current maps and CSV files are available on the mailing list.
Regularly updated reports about the current DNSSEC deployment. Contains information per TLD and global distribution.
Top-like utility showing information about captured DNS requests. It shows information about the domains queried, the types, and the responses.
Driftnet watches network traffic and picks out and displays JPEG and GIF images.
This is an improvement on Paris traceroute and the classical traceroute. It can detect changing routes and NATs along the path.
Tracker Radar collects common third-party domains and rich metadata about them. The data is collected from the DuckDuckGo crawler. More details are in this blog post.
This is not a block list, but a data set of the most common third-party domains on the web with information about their behavior, classification and ownership. It allows for easy custom solutions with the significant metadata it has for each domain: parent entity, prevalence, use of fingerprinting, cookies, privacy policy, and performance. The data on individual domains can be found in the domains directory.
FD.io is a very fast userspace networking library, which allows creating programs for packet processing. While DPDK allows fast read and write access to the NICs, FD.io is focused on processing the packets. Possible use cases are a packet forwarder, a NAT, or a VPN.
More details also in this APNIC blog post: https://blog.apnic.net/2020/04/17/kernel-bypass-networking-with-fd-io-and-vpp/
Flamethrower is a small, fast, configurable tool for functional testing, benchmarking, and stress testing DNS servers and networks. It supports IPv4, IPv6, UDP, TCP, DoT, and DoH and has a modular system for generating queries used in the tests.
This dataset contains the responses to DNS requests for all forward DNS names known by Rapid7's Project Sonar. Until early November 2017, all of these were for the 'ANY' record with a fallback A and AAAA request if necessary. After that, the ANY study represents only the responses to ANY requests, and dedicated studies were created for the A, AAAA, CNAME and TXT record lookups with appropriately named files.
The data is updated every month. Historic data can be downloaded after creating a free account.
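The published files contain one JSON object per line; the field names below follow the documented Sonar FDNS format (`timestamp`, `name`, `type`, `value`) and should be verified against the dataset description. A minimal filtering sketch:

```python
import json

# Inline sample standing in for lines read from a (gzip-compressed) FDNS file
sample_lines = [
    '{"timestamp":"1605835800","name":"example.com","type":"a","value":"93.184.216.34"}',
    '{"timestamp":"1605835800","name":"example.com","type":"txt","value":"v=spf1 -all"}',
]

def a_records(lines):
    """Yield (name, value) pairs for A records only."""
    for line in lines:
        record = json.loads(line)
        if record["type"] == "a":
            yield record["name"], record["value"]

print(list(a_records(sample_lines)))  # [('example.com', '93.184.216.34')]
```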
3D map showing submarine cables and the backbone network of Hurricane Electric.
ICANN tracks the general health of the DNS ecosystem and related ecosystems. The data is updated irregularly, but historic data is available. The collected data covers eight major topics:
- M1: Inaccuracy of Whois Data
- M2: Domain Name Abuse
- M3: DNS Root Traffic Analysis
- M4: DNS Recursive Server Analysis
- M5: Recursive Resolver Integrity
- M6: IANA Registries for DNS Parameters
- M7: DNSSEC Deployment
- M8: DNS Authoritative Servers Analysis
Each topic has too many subcategories to list here.
These websites have lists of abusive IP addresses. They can be checked with a web form, or some websites also provide a feed.
The cheatsheet describes in a few words what the different subcommands of `ip` do. It includes some other helpful networking commands for `arping`, `ethtool`, and `ss`, and provides a comparison with the older net-tools commands.
Historical dataset about IP to ASN mappings.
Historical dataset about IP to ASN mappings.
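A lookup against such a mapping is a longest-prefix match. A toy sketch with a hand-made table (real datasets contain millions of rows and call for a proper trie or binary search):

```python
import ipaddress

# Toy excerpt of an IP-to-ASN mapping; prefixes and ASNs are made up.
prefix_to_asn = {
    ipaddress.ip_network("192.0.2.0/24"): 64500,
    ipaddress.ip_network("192.0.0.0/16"): 64501,
}

def lookup_asn(ip: str):
    """Return the ASN of the longest (most specific) covering prefix, or None."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in prefix_to_asn if addr in net]
    if not matches:
        return None
    return prefix_to_asn[max(matches, key=lambda net: net.prefixlen)]

print(lookup_asn("192.0.2.5"))  # 64500 (the /24 wins over the /16)
print(lookup_asn("8.8.8.8"))    # None (no covering prefix in the toy table)
```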
An IP geolocation service that feeds itself from geolocation databases, user-provided locations, and, most importantly, active RTT measurements based on the RIPE Atlas system. It provides a nice API to query locations, including a breakdown of where the results stem from and how much each source contributes to the overall result.
RIPE Report
Per-continent, per-region, or per-country measurements of IPv6 deployment and preference. Historical data is accessible.
APNIC Report
Per-continent, per-region, or per-country measurements of IPv6 deployment and preference.
A curated list of IPv6 hosts, gathered by crawling different lists. Includes:
- Alexa domains
- Cisco Umbrella
- CAIDA DNS names
- Rapid7 DNS ANY and rDNS
- Various zone files
Access to the full list requires registration by email.
Based on the paper "Scanning the IPv6 Internet: Towards a Comprehensive Hitlist".
The website contains the additional material of the IMC paper Clusters in the Expanse: Understanding and Unbiasing IPv6 Hitlists. The IPv6 addresses can be downloaded from the website. The website has three lists, responsive IPv6 addresses, aliased prefixes, and non-aliased prefixes. Additionally, the website also has a list of tools used during the data creation.
RIPE gathers data about the IPv6 deployments worldwide and publishes the information on their IPv6 RIPEness website. The deployments are judged on four points:
- Having an IPv6 address space allocation or assignment from the RIPE NCC
- Visibility in the Routing Information Service (RIS)
- Having a route6 object in the RIPE Database
- Having a reverse DNS delegation set up
Contains a list of pricing information for different IXPs.
The Internet Health Report reports on significant disruption events between networks. They use BGP and traceroutes as their data sources. The report contains information about the connectivity of ASes, such as the most common upstream networks and the RPKI status of announcements. Link quality information is included, like historic network delay, forwarding anomalies, or network disconnects.
Maps of measurements done with the RIPE Atlas.
The Internet Society gathers data to show the general health and availability of the internet. They measure four categories: internet shutdowns, technology use, resilience, and concentration. Under internet shutdowns, they show which countries are performing what kind of disruption, e.g., regional or national. The technology section lists basic statistics about HTTPS, IPv6, TLS, and DNSSEC.
"Is BGP safe yet?" is an effort by Cloudflare to track the deployment of RPKI filtering across different ISPs. They provide a tester on the website with which each user can test if the current ISP is filtering RPKI invalid announcements. The website includes a list of networks and if and how they use RPKI (signing and/or filtering).
More details for this project can be found in Cloudflare's blog or on the GitHub project.
Contains a list of UDP-based protocols, which can be used for amplification attacks.
The Packet Clearing House (PCH) publishes BGP data collected at more than 100 internet exchange points (IXPs). The snapshot dataset contains the state of the routing tables at daily intervals.
PCH also provides raw routing data in MRT format. These contain all the update information sorted by time.
The RIS is the main resource from RIPE featuring all kinds of datasets about AS assignments and connectivity.
Routeviews is a project by the University of Oregon to provide live and historical BGP routing data.
Contains information about the state of the RFCs and what kind of information they contain.
The website shows links to different looking glasses which provide either traceroute information or are usable as route servers.
These projects either operate DNS-based Real-time Blackhole Lists (RBLs) or allow checking whether an IP is listed. The Multi-RBL websites are helpful for checking a large number of RBLs at once.
Mutually Agreed Norms for Routing Security (MANRS) is an initiative to improve the state of routing security. The observatory shows what kind of incidents occurred and how prepared networks are, e.g., with filtering and coordination efforts. The data is available globally and comparisons between regions are available. Historic data is accessible on the website.
The Measurement Factory performed a study of open DNS resolvers between 2006 and 2017. The website has an archive of daily reports, which each list the number of open resolvers per ASN.
The website provides a tool to select a list of autonomous systems with a fairer probe distribution. Probes are not distributed equally, but rather cluster based on population. This leads to large biases towards western locations and certain autonomous systems. The website offers different distance metrics. The output is a list of autonomous system numbers for use in the RIPE Atlas API.
The mini internet project is part of the curriculum by the Networked Systems Group of ETH Zurich. It teaches the students the basic steps of how to create a mini internet. It starts with the basics of intra-network routing, by setting up multiple L2 switches. Then the students have to configure L3 routers to connect multiple L2 sites together. Lastly, in a big hackathon style, the students need to connect their local network with the network of the other students, by properly configuring BGP routers and setting up routing policies.
The code and the tasks are all available in the GitHub repository.
The APNIC Blog has a nice introduction to the project too.
Multi-level MDA-Lite Paris Traceroute is a traceroute tool, which understands and learns more complex network topologies. Often, the network path is not just a line; multiple paths are possible and chosen at random.
A good description of the tool can be found in the RIPE Labs post or in the IMC 2018 paper.
The website presents the ongoing work of measuring the MPTCP deployment. Aggregated statistics for IPv4 and IPv6 are shown. Access to raw data is available, after a free email registration.
The website shows ongoing DDoS attacks in real time. Attacks are shown with source and destination country, along with further information such as the protocols used and the attack bandwidth.
The Netlab of 360.com provides some open data streams.
One dataset concerns the number of abused reflectors per protocol.
Overview of IP addresses scanning the internet and which ports are scanned.
OpenINTEL is an active DNS database. It gathers information from public zone files, domain lists (Alexa, Umbrella), and reverse DNS entries. Once every 24 hours, data is collected for a number of DNS RRsets (`SOA`, `NS`, `A`, `AAAA`, `MX`, `TXT`, `DNSKEY`, `DS`, `NSEC3`, `CAA`, `CDS`, `CDNSKEY`). The data is openly available as AVRO files and dates back to 2016.
The data can be freely downloaded. There is documentation on the layout of the AVRO files.
The project is similar to Active DNS but seems to be larger in scope.
Open Traffic Generator (OTG) is an open standard, specifying a declarative and vendor neutral API for testing Layer 2-7 network devices and applications (at any scale).
PEERING is an environment where researchers and educators can play with BGP announcements in a real but sandboxed environment.
Description from the website:
The long-term goal of the PEERING system is to enable on-demand, safe, and controlled access to the Internet routing ecosystem for researchers and educators:
- PEERING for researchers. Today, it is hard for researchers to conduct Internet routing experiments. To perform a routing experiment, a research institution has to obtain Internet resources (IP addresses and ASNs) and establish relations with upstream networks. PEERING eliminates these obstacles and provides researchers controlled on-demand access to the routing ecosystem.
- PEERING for educators. Educators can use the PEERING infrastructure in teaching students the Internet routing architecture. The students get access to live BGP sessions to multiple ISPs.
This is an improvement on the traditional traceroute program. It can detect multiple distinct routes and display them accordingly. The classical traceroute can produce confusing results when network routes change.
Another similar program is Dublin traceroute.
Passive DNS dataset from circl.lu.
Traceroutes can be difficult to understand. PathVis visualizes the network connections of your computer. It creates a tree of network nodes, with the root being the PathVis computer. The tree shows the paths to the other endpoints the computer is talking to.
The blog post introduces PathVis and explains the motivation behind it.
Contains peering information for some networks, including peering partners, transfer speeds, peering requirements, and similar.
Documentation
The public suffix list gives a way to easily determine the effective second-level domain, i.e., the domain a domain owner actually registered; names directly under a public suffix can belong to different owners.
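A minimal sketch of the eTLD+1 lookup the list enables. The tiny suffix set here is made up for illustration, and a real implementation must handle the list's wildcard and exception rules (libraries such as publicsuffix2 do this):

```python
# Toy public-suffix set; the real list has thousands of entries.
PUBLIC_SUFFIXES = {"com", "org", "uk", "co.uk"}

def registrable_domain(domain: str):
    """Return the effective second-level domain (eTLD+1), or None."""
    labels = domain.lower().rstrip(".").split(".")
    # Iterate from the longest candidate suffix to the shortest; the first
    # hit is the longest matching public suffix. Keep one extra label.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in PUBLIC_SUFFIXES and i > 0:
            return ".".join(labels[i - 1:])
    return None

print(registrable_domain("www.example.co.uk"))  # example.co.uk
print(registrable_domain("example.com"))        # example.com
```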
The website tracks the deployment of the Registration Data Access Protocol (RDAP) for all TLDs. RDAP is the successor to whois and offers structured, machine-readable data.
RIPE operates a set of probes, which can be used to send pings or similar measurements. The probes are mainly placed in Europe, but some are also on other continents.
All the collected measurements can be found in the RIPE Atlas Daily Archives. The blog post gives some more details.
The repository contains code for a better probe selection for the RIPE Atlas measurement system. Probes are not distributed equally, but rather cluster based on population. This leads to large biases towards western locations and certain autonomous systems. The goal of the repository is to find a more equal, thus fairer probe selection.
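The core idea (not the repository's actual algorithm) can be sketched as capping the number of probes drawn from any single AS:

```python
import random
from collections import defaultdict

# Toy probe list as (probe_id, asn) pairs; real metadata comes from the
# RIPE Atlas API. The ASNs are made up for illustration.
probes = [(1, 3320), (2, 3320), (3, 3320), (4, 7922), (5, 7922), (6, 64500)]

def fair_selection(probes, per_asn_cap=1, seed=42):
    """Pick at most `per_asn_cap` probes from each ASN, at random."""
    rng = random.Random(seed)
    by_asn = defaultdict(list)
    for probe_id, asn in probes:
        by_asn[asn].append(probe_id)
    selection = []
    for asn, ids in sorted(by_asn.items()):
        selection.extend(rng.sample(ids, min(per_asn_cap, len(ids))))
    return sorted(selection)

print(len(fair_selection(probes)))  # 3 -- one probe per ASN
```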
RIPEstat is a network statistics platform by RIPE. The platform shows data for IP addresses, networks, ASNs, and DNS names. This includes information such as the registration information, abuse contacts, blocklist status, BGP information, geolocation lookups, or reverse DNS names. Additionally, the website links to many other useful tools, such as an address space hierarchy viewer, historical whois information, and routing consistency checks.
The Route Origin Validation (ROV) Deployment Monitor measures how many ASes have deployed ROV. It uses PEERING for BGP announcements and BGP monitors to see in which ASes the wrong announcements are filtered. A blog post at APNIC describes it in more detail.
These websites allow you to browse the valid RPKI announcements. They show which address ranges are covered by RPKI and who the issuing authority is.
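Checking an announcement against ROAs follows the RFC 6811 semantics: a route is valid if some covering ROA matches the origin AS and the announced prefix is no longer than the ROA's maxLength. A sketch with a made-up ROA table:

```python
import ipaddress

# Made-up ROA for illustration; real ROAs come from the RPKI repositories.
roas = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "max_length": 24, "asn": 64500},
]

def validate(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as valid, invalid, or not-found."""
    net = ipaddress.ip_network(prefix)
    covered = [r for r in roas if net.subnet_of(r["prefix"])]
    if not covered:
        return "not-found"
    for roa in covered:
        if roa["asn"] == origin_asn and net.prefixlen <= roa["max_length"]:
            return "valid"
    return "invalid"

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/24", 64501))     # invalid (wrong origin AS)
print(validate("198.51.100.0/24", 64500))  # not-found
```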
Website that tests whether your provider filters invalid announcements using RPKI.
https://catalog.caida.org/details/paper/2019_learning_regexes_extract_router
https://catalog.caida.org/details/paper/2020_learning_extract_use_asns
These two papers focus on how to extract information from the hostname of routers. These hostnames occur when performing traceroutes. The regexes can be used to extract identifiers and AS numbers. The generated datasets of the papers are openly accessible.
The site contains two PDFs showing the relationship between the DNS RFCs. There is a simplified overview and a full overview. The graphs are regularly updated. The RFCs are organized by time, by topic area (e.g., DNSSEC, RR), and the RFC status (e.g., Standard, Best Current Practice).
Different information regarding reachability and connectivity of ASes.
The Shadowserver Scanning project performs regular Internet-wide scans for many protocols. The dashboard shows the gathered data about botnet sinkholes, Internet scans, honeypots, DDoS, and IoT data. This includes information about the size of botnets, the number of IP addresses with open ports like MySQL, the botnets as seen by honeypots, or the protocols used for DDoS attacks.
The blog post provides an introduction to the new dashboard.
The Shadowserver Scanning project performs regular Internet-wide scans for many protocols. They scan for four main types of protocols:
- Amplification protocols, e.g., DNS or NTP
- Botnet protocols, e.g., Gameover Zeus or Sality
- Protocols that should not be exposed, e.g., Elasticsearch, LDAP, or RDP
- Vulnerable Protocols, e.g., SSLv3
The website is a great resource to get general statistics about the protocols, like the number of hosts speaking the protocol, their geographic distribution, associated ASNs, and the historic information.
Shodan performs regular scans on common ports.
Access is free, but requires registration. More results can be gained with a paid account.
The TLD Apex History is an ICANN project to gather DNSSEC related records for all TLDs.
Further information about the project can be found in this presentation.
User guide for the newer `ip` command under Linux. The guide consists of different tasks one might want to perform and their corresponding `ip` commands.
The website contains different `tcpdump` filters. It starts with basic filters and then builds up ever more complex ones. This is a good source for looking up complicated filters if one does not want to write them oneself.
TeleGeography provides different maps of the Internet. They contain information about submarine cables, global traffic volume, latency, internet exchange points. The data for the Submarine Map and the Internet Exchange Map can also be found on GitHub in text format.
These services allow you to create a domain name for any IP address. The IP address is encoded into the domain name. An overview of different services can be found here.
Online Services

- https://nip.io/ provides IPv4 only
  - Supports both `.` and `-` separators.
  - `10.0.0.1.nip.io` resolves to `10.0.0.1`
  - `192-168-1-250.nip.io` resolves to `192.168.1.250`
  - `customer1.app.10.0.0.1.nip.io` resolves to `10.0.0.1`
  - `magic-127-0-0-1.nip.io` resolves to `127.0.0.1`
- https://sslip.io/ provides IPv4 and IPv6
  - Supports both `.` and `-` separators.
  - Provides the ability to use the service with your own branding.
  - `192.168.0.1.sslip.io` resolves to `192.168.0.1`
  - `192-168-1-250.sslip.io` resolves to `192.168.1.250`
  - `www.192-168-0-1.sslip.io` resolves to `192.168.0.1`
  - `--1.sslip.io` resolves to `::1`
  - `2a01-4f8-c17-b8f--2.sslip.io` resolves to `2a01:4f8:c17:b8f::2`
- https://ip.addr.tools/ provides IPv4 and IPv6
  - Supports both `.` and `-` separators.
  - `192.168.0.1.ip.addr.tools` resolves to `192.168.0.1`
  - `192-168-1-250.ip.addr.tools` resolves to `192.168.1.250`
  - `www.192-168-0-1.ip.addr.tools` resolves to `192.168.0.1`
  - `2a01-4f8-c17-b8f--2.ip.addr.tools` resolves to `2a01:4f8:c17:b8f::2`
Self-hosted Options
- hipio is a Haskell service for IPv4.
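The name-to-address mapping these services perform can be sketched for the IPv4 case: extract the last embedded dotted or dashed quad from the query name (real services also handle IPv6 and more edge cases):

```python
import re

# Matches four 1-3 digit groups joined by '.' or '-' anywhere in a name.
IPV4_RE = re.compile(r"(\d{1,3})[.-](\d{1,3})[.-](\d{1,3})[.-](\d{1,3})")

def embedded_ipv4(name: str):
    """Extract the last embedded IPv4 address from a domain name, or None."""
    matches = IPV4_RE.findall(name)
    if not matches:
        return None
    quads = matches[-1]
    if all(int(q) <= 255 for q in quads):
        return ".".join(quads)
    return None

print(embedded_ipv4("customer1.app.10.0.0.1.nip.io"))  # 10.0.0.1
print(embedded_ipv4("192-168-1-250.sslip.io"))         # 192.168.1.250
```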
Yarrp is an active network topology discovery tool. Its goal is to identify router interfaces and interconnections at internet scale. Conceptually, this is similar to running many traceroutes and stitching them together into one view. However, traceroutes are designed to understand the connection between two hosts and do not scale easily.
Different utilities for network scanning. Most importantly, the zmap component, which is a packet scanner for different protocols. It also contains other tools like ways to iterate over the IPv4 address space and denylist/allowlist management.
Ziggy is a tool to inspect the RPKI ecosystem at arbitrary points in the past. It is developed by NLnet Labs. More details about Ziggy can be found in the announcement blog post.
The website provides download access to domains in many TLDs. Most lists are updated daily. However, not all the lists seem complete. For example, DENIC reports that they manage over 17 million domains, whereas zonefiles.io only reports over 6 million domains.
dn42 is a big dynamic VPN. It employs various internet technologies, such as BGP, whois, and DNS.
Users can experiment with technology they normally would not use, in a separated environment.
The participants are mostly hackerspaces, such as different locations of the CCC.
DNS performance measurement tools.
Dnsthought lists many statistics about the resolvers visible to the .nl-authoritative name servers. The data is gathered from the RIPE Atlas probes. There is a dashboard which only works partially.
Raw data access is also available.
IODA is a project by CAIDA to use different data sources to detect macroscopic internet outages in real-time. It measures the internet activity using BGP, darknets, and active probing. The website provides a real-time feed and a historical view of outages.
The nPrint project is a collection of open-source software and benchmarks for network traffic analysis that aim to replace the built-to-task approach currently taken when examining traffic analysis tasks.
The nmap stylesheet converts the nmap XML output into a nice website. A sample report can be found under this link.
pyNTM allows creating a network model with circuits between layer 3 nodes. The model then allows simulating and evaluating how traffic will traverse the topology. This can be used to test different network topologies and failover scenarios.
A toolchain for gathering DNS responses and analyzing differences between them.
This is a tcpdump-like program for printing TLS SNI and HTTP/1.1 Host fields in live or captured traffic.
A traceroute-like tool that detects where a path crosses an IXP.
WireDiff is a debugging tool to diff network traffic leveraging Wireshark.
Wirediff lets you open two network traces side-by-side. You can select a packet from each trace and diff their content at the protocol level you want.
A more thorough introduction is available in the APNIC blog: https://blog.apnic.net/2020/07/01/wirediff-a-new-tool-to-diff-network-captures/.