Abstract
Online social media contain valuable quantitative and qualitative data that are necessary to advance the study of complex social systems. However, these data vaults are often behind a wall: the owners of the media sites dictate what, when, and how much data can be collected via a mandatory interface (the Application Programming Interface, or API). To work within such restrictions, network scientists have designed sampling methods that do not require a full crawl of the data to obtain a representative picture of the underlying social network. However, such sampling methods are usually evaluated along only one dimension: which strategy allows
for the extraction of a sample whose statistical properties are closest to those of the original network? In this paper we go beyond this view by creating a benchmark that tests the performance of a method in a multifaceted way. When evaluating a network sampling algorithm, we take into account both the API policies and the
budget a researcher has to explore the network. By doing so, we show that some methods which are considered to perform poorly can actually perform well with tighter budgets, or with different API policies. Our results show that the decision of which sampling algorithm to use is not monodimensional: it is not enough to ask which method returns the most accurate sample; one also has to consider which API constraints it must go through, and how much it can spend on the crawl.
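The abstract frames sampling as a trade-off between sample accuracy, API policy, and crawl budget. As a minimal illustrative sketch (not the paper's actual benchmark), the snippet below implements a budget-limited random-walk sampler: a plain dictionary stands in for the rate-limited API, and each neighbor lookup costs one API call. All names here (`random_walk_sample`, `toy`) are hypothetical.

```python
import random

def random_walk_sample(graph, start, budget, seed=0):
    """Random-walk sampler that stops once the API-call budget is spent.

    graph: adjacency dict {node: [neighbors]} standing in for the API;
    every neighbor lookup is billed as one API call.
    Returns the set of visited nodes and the number of calls used.
    """
    rng = random.Random(seed)
    visited = {start}
    calls = 0
    current = start
    while calls < budget:
        neighbors = graph[current]  # one API request
        calls += 1
        if not neighbors:           # dead end: nothing left to walk to
            break
        current = rng.choice(neighbors)
        visited.add(current)
    return visited, calls

# Toy 6-node graph standing in for the hidden social network.
toy = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
sample, used = random_walk_sample(toy, start=0, budget=4)
```

Under the paper's view, one would compare such samplers not only on how well `sample` preserves the statistics of the full graph, but also on how they degrade as `budget` shrinks or as the API changes what each call returns.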
Original language | English |
---|---|
Title of host publication | 2018 IEEE International Conference on Big Data, BigData 2018 |
Publisher | IEEE |
Publication date | 14 Dec 2018 |
ISBN (Print) | 978-1-5386-5034-9 |
ISBN (Electronic) | 978-1-5386-5035-6 |
DOIs | |
Publication status | Published - 14 Dec 2018 |
Event | 2018 IEEE International Conference on Big Data (Big Data), Westin Seattle, 1900 5th Avenue, Seattle, United States. Duration: 10 Dec 2018 → 13 Dec 2018. Conference number: 6. http://cci.drexel.edu/bigdata/bigdata2018/ |
Conference
Conference | 2018 IEEE International Conference on Big Data (Big Data) |
---|---|
Number | 6 |
Location | Westin Seattle, 1900 5th Avenue. |
Country/Territory | United States |
City | Seattle |
Period | 10/12/2018 → 13/12/2018 |
Internet address | http://cci.drexel.edu/bigdata/bigdata2018/ |
Keywords
- network analysis
- social media
- network sampling