Abstract:
|
In ongoing efforts to fight human trafficking across the country, front-line service agencies and law enforcement work miracles daily to stem the tide of this epidemic. However, more research is needed to keep pace with traffickers so that this multi-billion-dollar trafficking business can be disrupted. Although sex trafficking accounts for only 19% of persons who experience trafficking, it generates two-thirds of the estimated annual profit. Focusing on sex trafficking networks in the age of online advertising and COVID-19 allows researchers to identify networks that hold a large footprint in this illegal industry. To help tackle the persistent problem of defining exactly what constitutes sex trafficking, this paper discusses one of the first steps undertaken by a three-university collaboration. Before predictive models could be trained to identify sex trafficking, records highly likely to be sex trafficking advertisements had to be identified to serve as training data. Online advertisements were scraped from across the internet and structured for analysis. Eight datasets consisting of 1,000 randomly selected online ads were assigned to fourteen reviewers working in pairs. Reviewers were provided with definitions of eight possible categories for code assignment. An expert reviewer had previously reviewed all assigned records, but the expert's category assignments were not shared with the reviewers. Text analytics was performed on the same datasets. This paper provides an overview of the inter-rater and intra-rater reliabilities of this review process.
|