Protecting Children Online: A Comprehensive Guide to Internet Safety, Digital Dangers, and What Families Can Do

Children today grow up online — and that connectivity brings real risks alongside its benefits. From online predators and sextortion to cyberbullying, harmful content, and AI-generated exploitation, the threats children face in digital spaces are serious, well-documented, and evolving rapidly. This guide draws on research from the CDC, FBI, NCMEC, the U.S. Surgeon General, and leading child safety organizations to help families, educators, and professionals understand those threats — and take meaningful steps to protect the children in their lives.


The Digital Landscape: How Children Use the Internet Today

Internet access among young people in the United States is now nearly universal. According to a 2024 Pew Research Center survey of 1,391 U.S. teens ages 13–17, 95% have access to a smartphone — up from 73% a decade ago. Among older teens ages 15–17, that figure reaches 98%. Tablet access stands at 70%, and 88% have access to a desktop or laptop computer. Connectivity is no longer confined to a desk in a family room; it travels everywhere a child goes.

The frequency of that access is striking. According to Pew's Teens and Internet Device Access Fact Sheet, 96% of U.S. teens report using the internet every day, and 46% say they are online "almost constantly" — more than double the 24% who said the same a decade ago. One in three teens uses at least one major social media platform almost constantly.

Platform use among teens spans a wide range of apps. YouTube reaches 90% of teens, with 73% using it daily. TikTok and Instagram each reach roughly 60% of teens, and Snapchat is used by 55%. Social media accounts for an average of 1 hour 27 minutes per day among teens, according to Common Sense Media's 2021 Census, though overall entertainment screen time is far higher — averaging 5 hours 33 minutes daily for tweens ages 8–12 and 8 hours 39 minutes daily for teens ages 13–18.

Younger children are also connected. Approximately 25% of children own a personal cellphone by age 8, and the U.S. Surgeon General's 2023 Advisory found that nearly 40% of children ages 8–12 use social media — despite minimum age requirements of 13 on most major platforms. Children are growing up in digital environments their caregivers may not fully understand, and the risks they encounter there are not hypothetical.

The CDC's 2024 NCHS Data Brief on Daily Screen Time Among Teenagers found that 50.4% of U.S. teens ages 12–17 spend four or more hours per day on screens — and those teens were more than twice as likely to experience symptoms of anxiety (27.1%) and depression (25.9%) as those with less than four hours of daily use (12.3% and 9.5%, respectively). Screen time on its own is not inherently harmful, but the volume, the content, and the lack of adult engagement can create conditions where risk accumulates.

Online Predators and Grooming

Online grooming is the process by which an adult or older person builds a trusting relationship with a child — often over weeks or months — with the intent to exploit that child sexually. It is not rare. In 2024, NCMEC's CyberTipline received 20.5 million reports of suspected child sexual exploitation, which adjusted to 29.2 million separate incidents when accounting for changes in how reports are bundled. The volume of CSAM reports analyzed by NCMEC has increased 87% since 2019, according to the WeProtect Global Alliance. Since the CyberTipline opened in 1998, NCMEC has responded to more than 226 million reports of child sexual exploitation.

Predators are active across social media, gaming platforms, and messaging apps. The FBI estimates that 500,000 predators are active online every day, with children ages 12–15 most frequently targeted. According to the FBI, over 50% of victims of online sexual exploitation are between 12 and 15 years old, and an estimated 89% of sexual advances directed at children occur in internet chat rooms or through messaging. These are not exclusively interactions with strangers: in a 2025 Thorn survey, one in three victims of sexual extortion knew their abuser in person.

In the United Kingdom, the NSPCC reported 7,062 recorded online grooming offences in 2023/24 — the first time the count has exceeded 7,000, and an 89% increase since 2017/18. Forty-eight percent of grooming cases where a platform was identified occurred on Snapchat; Meta-owned platforms (WhatsApp, Facebook, Instagram) accounted for another 30%. Girls represent 81% of grooming victims, and primary school-aged children are also being targeted.

The Stages of Grooming

Understanding how grooming works helps caregivers recognize it and helps children identify when something is wrong. Grooming typically follows recognizable stages, though not always in rigid sequence. Predators begin by building trust — observing public posts and social media profiles, identifying children who appear lonely, seeking validation, or going through difficulty, and simulating a "special connection" by expressing shared interests and offering compliments. They may present themselves as peers or as an older friend who "truly understands."

Next comes emotional manipulation: giving special attention, gifts, or in-game currency to create a sense of obligation and affection. Predators then assess risk by asking questions designed to determine how closely parents monitor the child's devices, and they may request that conversations move to a more private or encrypted platform. The isolation phase follows — gradually turning the child against parents and friends by framing the relationship as uniquely special and encouraging secrecy. The predator then uses desensitization, slowly introducing sexual content or conversations to normalize behavior the child might otherwise recognize as inappropriate. Finally, if the relationship has progressed, threats and exploitation may follow — using images or information as leverage for blackmail.

One finding from the NSPCC, citing Internet Watch Foundation data, is especially important for caregivers to understand: over 70% of identified child sexual abuse images in recent years were "self-generated" — meaning children were manipulated into creating the images themselves. This does not reflect consent or complicity on the child's part; it reflects the effectiveness of grooming tactics in making children feel that compliance is the only option.

Sextortion

Sextortion is the use of real or threatened intimate images to coerce a victim — typically demanding money, more images, or sexual acts. It is one of the fastest-growing threats to children online. In 2024, NCMEC received nearly 100 reports of financial sextortion per day; between 2021 and 2023, online enticement reports (the category that encompasses sextortion) increased more than 300%. In 2025, NCMEC received 1.4 million reports of online enticement — a 156% increase from 2024.

Financial sextortion primarily targets teenage boys, ages 14–17. Males represent 91% of financial sextortion victims in the United States, according to NCMEC's 2024 sextortion data release. Since 2021, NCMEC has confirmed that at least 36 teenage boys have died by suicide as a direct result of sextortion victimization. The FBI recorded at least 20 documented suicides among minors linked to financial sextortion in 2022 alone. These are not isolated tragedies — they reflect a coordinated, often transnational criminal enterprise deliberately targeting adolescent boys.

Traditional sextortion — seeking additional explicit content rather than money — more commonly targets girls ages 10–17. A June 2025 Thorn survey of 1,200 young people ages 13–20 found that 1 in 5 had a lived experience of sextortion, and 1 in 4 reported having been a victim of sexual extortion while under age 18. Girls and LGBTQ+ youth were most likely to be threatened with demands for more imagery; boys were most likely to be targeted for money. One in 8 sextortion victims reported that the perpetrator threatened them with a deepfake of themselves.

How sextortion works: An abuser — often posing as a peer or romantic interest — initiates contact on social media or gaming platforms, establishes rapport, and then requests or manipulates the child into sharing an intimate image. Once obtained, the threat follows immediately: pay money (typically demanded via gift cards or cryptocurrency, per NCMEC) or face exposure to family, friends, and school. Instagram is the most common platform for initial contact, implicated in 45% of youth sextortion cases, followed by Snapchat. Offenders commonly move victims to additional encrypted messaging apps after initial contact. The cycle of shame and fear is designed to be paralyzing — which is why the most important thing families can do is establish, in advance, that a child who finds themselves in this situation will not face punishment for coming forward.


Cyberbullying

Cyberbullying — harassment, threats, humiliation, or social exclusion carried out through digital platforms — affects a substantial portion of children and adolescents. The CDC's 2024 NCHS Data Brief on Bullying Victimization found that 34% of U.S. teenagers ages 12–17 reported being bullied in the prior 12 months. The Cyberbullying Research Center found that 26.5% of U.S. teens ages 13–17 reported experiencing cyberbullying in the prior 30 days in 2023 — up from 23.2% in 2021, 17.2% in 2019, and 16.7% in 2016. According to Pew Research Center's 2022 Teens and Cyberbullying report, 46% of U.S. teens have experienced at least one form of cyberbullying in their lifetimes.

Who Is Most Affected

LGBTQ+ youth and girls face disproportionate rates of cyberbullying. The CDC Data Brief found that 47.1% of sexual- or gender-minority teenagers were bullied, compared to 30.0% of their peers. Among LGBTQ+ youth specifically, nearly 30% experienced electronic bullying in the past year — more than twice the 13% rate among heterosexual students. Girls are also more frequently cyberbullied: 59.2% of female teens reported lifetime cyberbullying, compared to 49.5% of males. The CDC's 2023 YRBS found that 21% of female high school students were electronically bullied in 2023, compared to 12% of males.

Common forms of cyberbullying include offensive name-calling (experienced by 32% of teens), false rumors spread online (22%), and being sent explicit images they did not request (17%), per Pew Research. Cyberbullying often occurs across multiple platforms simultaneously and can involve peers, ex-partners, or groups organized specifically to target one individual. The 24/7 nature of digital harassment — reaching children in their bedrooms at night, during school, and in spaces they might otherwise consider safe — distinguishes it meaningfully from in-person bullying.


Psychological Impact

The mental health consequences of cyberbullying are serious and well-documented. According to ICANotes' 2024 analysis of the research literature, cyberbullying victims are more than twice as likely to experience depressive symptoms; 86% of victims report the experience affected them negatively, 78% say it hurt their self-confidence, and 70% say it affected their self-esteem. Compared to non-bullied peers, cyberbullying victims were 2.5 times more likely to experience suicidal ideation, 11.5 times more likely to present to emergency departments with suicidal ideation, and 4.2 times more likely to experience suicidality overall. For LGBTQ+ youth who were electronically bullied, research shows a threefold higher chance of attempting suicide. The share of teen victims who missed school days because of cyberbullying has nearly doubled — from 10.3% in 2016 to 19.2% in 2023.

The CDC's 2024 analysis of YRBS data found that students who used social media frequently were more likely to be electronically bullied, experience persistent sadness or hopelessness, seriously consider attempting suicide, and make a suicide plan — even after controlling for other variables. Frequent social media use does not cause all of these outcomes, but it creates conditions in which the risks compound.

Social Media and Mental Health

In May 2023, U.S. Surgeon General Dr. Vivek Murthy issued an advisory concluding that "while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents." The Surgeon General's Advisory on Social Media and Youth Mental Health cited evidence that teens who use social media for more than three hours a day face double the risk of depression and anxiety symptoms — a threshold that the average teen already exceeds, as the average is 3.5 hours daily.

Research supporting these conclusions includes a longitudinal cohort study of U.S. adolescents ages 12–15 (n=6,595) showing double risk of depression and anxiety at the three-hour threshold; a natural experiment studying social media rollout across U.S. colleges (n=359,827 observations) linking platform introduction to a 9% increase in depression and 12% increase in anxiety over baseline; and a randomized controlled trial finding that limiting social media to 30 minutes per day for three weeks led to more than a 35% improvement in depression scores among participants with high baseline depression. A separate randomized controlled trial found that deactivating social media for four weeks improved subjective well-being by 25–40% of the effect size of established psychological interventions.

Body Image and Sleep

Body image is a significant dimension of social media's impact on young people. The Surgeon General's Advisory found that 46% of adolescents said social media makes them feel worse about their bodies, and nearly half say it causes them to worry about their body image. A 2024 BYU Ballard Brief analysis noted that the typical age at which children receive their first smartphone — 12 to 13 — coincides with the typical age of onset of Body Dysmorphic Disorder. Teens with distorted body image are twice as likely to attempt or think about suicide as their peers. A systematic review of 42 studies cited by the Surgeon General found consistent relationships between social media use and poor sleep quality, reduced sleep duration, and depression among young people.

The American Psychological Association's 2023 Health Advisory on Social Media in Adolescence — the APA's first-ever advisory of this type — found that risks are greater during early adolescence (ages 10–14) than later, because the brain regions governing attention, peer feedback, and reinforcement are most sensitive during this developmental window. The APA found that 41% of teens with the highest social media use rate their overall mental health as "poor" or "very poor," compared to 23% of the lowest users. APA data shows that 10% of highest social media users expressed suicidal intent or self-harm in the prior 12 months, compared to 5% of lowest users. Nearly 1 in 3 adolescents reports using screens until midnight or later.

Surgeon General Murthy has publicly stated that age 13 is "too early" for children to join social media, and has called for minimum age requirements to be raised. The APA's advisory notes that adult-designed platform features — like buttons, infinite scrolling, push notifications, and personalized recommendations — are "inappropriate for children" and that technology companies "have a commercial interest in keeping users engaged" that is misaligned with children's wellbeing.

Exposure to Harmful Content

Pornography

Children are encountering sexually explicit content at young ages, and increasingly through social media rather than dedicated pornography sites. A 2023 report by the UK Children's Commissioner found the average age at which children first see pornography is 13; 10% had seen it by age 9, 27% by age 11, and 50% by age 13. Among those who had seen it, 79% encountered violent pornography before age 18. A 2023 U.S. study found the average age of first pornography exposure to be around age 12, with 15% of children first seeing it at age 10 or younger. A 2025 UK survey found that 59% of minors were exposed to pornography accidentally via social media — a significant increase from 38% in 2023 — with X (formerly Twitter) as the leading platform, followed by Snapchat, Instagram, and TikTok.

Violent Content and Self-Harm

A 2024 Youth Endowment Fund survey of 10,000 children in England and Wales found that 70% of teens ages 13–17 had encountered real-life violent content online in the past year. Notably, 25% of that content was pushed to them by platform algorithms — not sought out. Only 6% actively searched for violent content. Types included fights between young people (56%), weapons content (35%), gang activity (33%), and sexually violent threats (27%). Among teens who saw weapons online, 80% said it made them feel less safe, and 39% said it made them more likely to carry a weapon themselves.

Self-harm and suicide content presents its own documented risks. A 2025 study published in JAACAP Open found that 50% of adolescents reported seeing self-harm content on social media over an 8-week monitoring period; in weeks when adolescents were exposed to such content, they were 8.6 times more likely to engage in non-suicidal self-injury that same week. An earlier cross-sectional study of inpatient adolescents found that 87% reported exposure to self-harm content on social media before their first self-harm episode, with exposed individuals twice as likely to have suicidal ideation and three times as likely to have engaged in self-harm. The CDC identified suicide as the second leading cause of death among individuals ages 10–24 in 2021. Bark monitoring data has found that 33% of tweens and 57% of teens were involved in a self-harm or suicidal situation captured through their devices.

Online Gaming Risks

Online gaming is one of the most significant — and underestimated — vectors for child exploitation. Predators use gaming environments because access is easy, children's guards are lower when they believe they're interacting with fellow players, and voice chat creates an immediacy that text-based platforms don't. According to the WeProtect Global Alliance's 2023 Global Threat Assessment, a conversation on a social gaming platform can escalate into a high-risk grooming situation in as little as 19 seconds, with an average time of 45 minutes. Pew Research has found that 97% of teenage boys and 83% of teenage girls play video games, giving predators enormous access to young people.

The FBI's Internet Crime Complaint Center has warned specifically about predators operating through gaming platforms. Predators exploit in-game chat features — voice, text, and lobby chats — to begin conversations that appear to be between peers. They offer game tips, in-game currency, or items to build rapport. They then move conversations off gaming platforms to less monitored apps — a process called "off-platforming" — targeting Discord, WhatsApp, Snapchat, and TikTok. Voice-masking technology, available in some gaming environments, may allow adults to sound younger. High-risk gaming platforms for grooming include Roblox, Fortnite, Among Us, Minecraft, and Xbox/PlayStation chat, with Discord frequently used as a follow-up communication channel.

Gaming addiction is a related concern. The WHO Regional Office for Europe's 2024 data found 12% of adolescents are at risk of problematic gaming; boys are at higher risk (16%) than girls (7%). Globally, a 2021 systematic review estimated gaming disorder prevalence at 3.05% of the general population, but at 6.6% among children and adolescents specifically. Signs of problematic gaming include abandoning other activities, deteriorating school performance, disrupted sleep, irritability when unable to play, and withdrawal from offline relationships.


Human Trafficking and Online Recruitment

Online platforms have fundamentally changed how sex trafficking recruitment operates. In 2021, 41% of sex trafficking survivors in U.S. federal cases were recruited by their trafficker on social media — up from 30% between 2000 and 2020, according to the Human Trafficking Institute. The 2023 Federal Human Trafficking Report identified Snapchat, Facebook, and Instagram as the top three platforms used to recruit sex trafficking victims. A Thorn study found that 55% of domestic minor sex trafficking survivors trafficked in 2015 or later met their trafficker for the first time through online channels, and 63% of traffickers used online methods to build trust with their victims.

Trafficking recruitment online relies on strategies that closely mirror grooming. The United Nations Office on Drugs and Crime (UNODC) identifies two primary approaches: hunting (proactively targeting specific vulnerable children based on visible indicators such as financial need, emotional distress, or isolation) and fishing (posting deceptive advertisements for modeling work, jobs, or relationships that attract self-selected victims). Common lures include romantic relationships, false job offers, gifts and money that create debt bondage, and normalization within online peer communities. According to the Safe House Project, traffickers use Instagram, Snapchat, TikTok, Facebook Messenger, Discord, WhatsApp, Kik, and online games as primary recruitment platforms.

NCMEC received 26,823 child sex trafficking reports in 2024 — a 55% increase from 2023, partly attributable to expanded mandatory reporting requirements under the REPORT Act enacted that year. In 2025, child sex trafficking reports to NCMEC surged to 105,877, a figure primarily attributed to continued expansion of mandatory reporting rather than a sudden spike in trafficking activity — though the underlying problem remains severe.


Artificial Intelligence: A New Frontier of Risk

Generative artificial intelligence has introduced threats to child safety that did not exist five years ago. The most alarming is AI-generated child sexual abuse material (AIG-CSAM): sexually explicit images and videos of children that are synthesized by AI rather than captured in real abuse. AIG-CSAM is illegal under existing U.S. federal law, which prohibits obscene content involving minors regardless of whether it is computer-generated. But its proliferation is accelerating rapidly.

In 2024, NCMEC's CyberTipline received 67,000 reports involving generative AI — a 1,325% increase from approximately 4,700 in 2023. The Internet Watch Foundation (IWF) detected 3,440 AI-generated videos of child sexual abuse in 2025 — up from just 13 the prior year, a 26,362% increase — with more than half classified at the most severe level. The IWF discovered over 20,000 AI-generated child abuse images on a single dark web forum within a single month. A 2024 Thorn survey found that 1 in 10 minors in the U.S. reported knowing peers who had used AI tools to generate sexually explicit images of other minors. One in 17 teens says they have been a target of AI-generated deepfakes.
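The scale of those increases follows directly from the rounded report counts quoted above; here is a quick arithmetic check (the pct_increase helper below is purely illustrative):

    # Percent increase = (new - old) / old * 100, using the rounded
    # report counts cited above.
    def pct_increase(old: float, new: float) -> float:
        return (new - old) / old * 100

    print(f"{pct_increase(4_700, 67_000):,.1f}%")  # 1,325.5%, the ~1,325% NCMEC figure
    print(f"{pct_increase(13, 3_440):,.1f}%")      # 26,361.5%, the ~26,362% IWF figure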

AI-powered "nudify" tools are widely and freely available online. These tools can digitally remove clothing from photographs — including ordinary school or social media photos of children — in seconds. A 2023 New Jersey case drew national attention when a teenager used a commercially available undressing AI site to create more than 30 nude images of female classmates and share them in group chats. In 2024, advertisements for nudify tools appeared on mainstream platforms. Deepfakes — AI-synthesized video or images of real people — are also being used to impersonate children in order to infiltrate their peer networks.

AI is also being weaponized in grooming and sextortion. Generative AI creates realistic grooming scripts that predators use to manipulate victims at scale. AI can simulate explicit chat conversations with children, enabling mass-volume exploitation attempts. Deepfakes of children can be fabricated from publicly available photos and then used as threats — giving predators leverage even when no real intimate image was ever shared. AI chatbots, designed to maximize engagement rather than user safety, have been documented engaging in sexually explicit conversations and role-play with minors, and have been found to encourage self-harm and reinforce suicidal ideation. Unlike human caregivers, chatbots have no duty of care.

Encrypted and Disappearing Messages

End-to-end encryption (E2EE) is an important privacy and security technology — it protects journalists, activists, domestic violence survivors, and people living under authoritarian governments. It also creates conditions in which child sexual exploitation is substantially harder to detect and report. When platforms use E2EE, operators cannot scan message content, which means that PhotoDNA and perceptual hash-scanning technology — the primary tools used to identify known child sexual abuse material circulating on platforms — cannot function.
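To make the detection mechanics concrete, the sketch below implements a simplified perceptual hash (an "average hash") together with the matching step a platform would run against fingerprints of known material. It illustrates the general technique only, not PhotoDNA's actual, proprietary algorithm; the function names and distance threshold are assumptions chosen for clarity.

    # A simplified perceptual "average hash" and matching step,
    # illustrating the general technique (not PhotoDNA itself).
    # Requires the Pillow imaging library: pip install Pillow
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink the image to an 8x8 grayscale thumbnail, then set one
        # bit per pixel that is brighter than the mean. Near-duplicates
        # (resized or recompressed copies) yield nearly identical
        # 64-bit fingerprints, unlike a cryptographic hash.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a, b):
        # Number of bits on which two fingerprints differ.
        return bin(a ^ b).count("1")

    def matches_known_material(upload_hash, known_hashes, threshold=5):
        # The server-side check: compare an upload's fingerprint
        # against hashes of known abuse imagery. A small Hamming
        # distance means a near-duplicate. Under E2EE the server never
        # sees the image, so this comparison has nowhere to run.
        return any(hamming_distance(upload_hash, h) <= threshold
                   for h in known_hashes)

The essential design property is that a perceptual hash changes only slightly when an image is resized or re-encoded, which is what lets platforms catch altered copies of known material; encryption removes the plaintext that the comparison depends on.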

The real-world consequences of this trade-off are measurable. When the EU issued a directive in 2020 temporarily restricting voluntary CSAM scanning of private messages, NCMEC found that EU-related reports from electronic service providers decreased by 51% in the first six weeks. More recently, Meta's rollout of end-to-end encryption on Facebook Messenger was the primary driver of a roughly 7 million drop in NCMEC CyberTipline reports in 2024 compared to 2023 (Meta alone filed 6.9 million fewer reports), despite new mandatory reporting requirements under the REPORT Act that should have increased them. NCMEC has expressed serious concern that this decline reflects not fewer incidents of abuse, but fewer detections.

Disappearing message features — popularized by Snapchat and used on many other platforms — present related risks for children. These features create a false sense that content shared will vanish permanently. Children are more likely to share images or engage in conversations they would otherwise avoid, believing there will be no lasting record. In reality, screenshots remain possible on most platforms, notifications of screenshots are not guaranteed, and content can be preserved and later used for blackmail. The NSPCC found that 48% of online grooming cases where a platform was identified involved Snapchat. Telegram and Kik present additional concerns: Kik does not track message content or users' phone numbers, making it extremely difficult for law enforcement to obtain information; Telegram's permissive approach to large file transfers has made it a platform of choice for CSAM distribution.

Data Privacy and Children's Digital Footprints

Children's data is collected, sold, and used in ways that most families are unaware of. By the time a child is 13 years old, online advertising firms have collected an average of 72 million data points about that individual, according to a Washington Post analysis. This includes browsing behavior, location, device identifiers, interests, and social connections — all pieced together across apps, games, and websites the child has used.

The Children's Online Privacy Protection Act (COPPA), enacted in 1998 and last updated in 2013, requires websites and apps directed to children under 13 to obtain verifiable parental consent before collecting personal data. In January 2025, the FTC voted 5-0 to approve significant updates to the COPPA Rule — the first major update in more than a decade — requiring separate parental consent for third-party targeted advertising, mandatory data minimization policies, expanded parental notice requirements, and enhanced data security standards. Civil penalties for COPPA violations can reach $53,088 per violation.

Despite these rules, enforcement gaps are significant. Research has found that 67% of popular free children's apps collect and share identifying information without proper parental consent. The average children's app contains seven third-party software development kits that collect and transmit data. Fifteen percent of children's apps collect geolocation data that could reveal a child's physical location. In educational settings, the picture is especially concerning: a 2022 study by Internet Safety Labs found that up to 96% of apps used in U.S. schools share student information with third parties, and 78% share this data with advertisers and data brokers. A Human Rights Watch global analysis of 164 educational technology products found that 89% posed risks to children's privacy through embedded advertising trackers.

A Guide for Parents: Keeping Children Safe Online

The research on what actually protects children online points consistently toward one finding: monitoring tools work best when they are combined with open communication, gradual autonomy, and ongoing digital literacy education. No app or parental control can substitute for a child who trusts that they can come to a parent without fear of punishment. The following guidance draws on recommendations from the American Academy of Pediatrics, the U.S. Surgeon General, NCMEC, the Cyberbullying Research Center, Common Sense Media, and peer-reviewed research.

Age-Appropriate Conversations About Online Safety

Online safety conversations should begin early — as young as age two or three — and evolve as the child grows. At every stage, the most important message is: you will not be in trouble if you come to me.

Young Children (Ages 2–9). At this stage, focus on awareness rather than restriction. Teach simple rules: some content online is not for children; your name, address, and school are private information we keep to ourselves; always ask an adult before clicking on something new. Use storytelling and play to explain concepts rather than technical language. Co-view content together as a family activity. Role-model safe internet habits — let children see you putting your phone away and being selective about what you click. Sources: Raising Children Network; Child Rescue Coalition.

Tweens (Ages 10–13). Tweens are approaching or entering social media age, experiencing peer pressure, and spending more time online independently. Conversations should expand to cover cyberbullying, online strangers, and the difference between real friends and online contacts. Establish that devices are used only in common areas of the house — not bedrooms or bathrooms — during certain hours. Review privacy settings together. Discuss what constitutes appropriate behavior in group chats and gaming communities. Conversation starters include: "Who are your followers? Do you know them in real life?" and "Let's look at your privacy settings together."

Teens (Ages 14–18). For teenagers, shift from rule enforcement to collaborative mentorship. Research from the American Academy of Pediatrics shows that teens who experience autonomy-supportive monitoring are more likely to share what they do online; teens who experience controlling monitoring are more likely to hide it. Make a no-punishment pact for online problems: "If something goes wrong online, I promise not to take your phone away if you come to me." Cover: the permanence of digital posts; how grooming actually begins (with flattery and common interests, not inappropriate requests); and sextortion — specifically, that if anyone threatens them with images, they should come forward without deleting anything. Secrecy is always a red flag; gifts and in-game currency from online contacts are manipulation tactics. Teens have the right to ignore, block, or refuse contact from anyone, at any time, without explanation.

Setting Boundaries and Rules

Structural rules reduce risk by creating physical and temporal limits on device use. The American Academy of Pediatrics recommends that children under 18 months avoid screens entirely (except video chatting); that children 18–24 months access educational programming only, with a caregiver present; and that children ages 2–5 use screens under one hour on weekdays and up to three hours on weekends for non-educational content. For children ages 6 and older, the AAP focuses on balance with sleep, physical activity, family time, and free play rather than strict time caps.

Device-free zones and times are strongly recommended by experts and supported by research. These include: bedrooms and bathrooms as phone-free zones at all times; all devices charging overnight in a central location (the kitchen, not the bedroom); screens off at a set evening hour ("digital sunset"); screen-free mealtimes; and all device use in common, visible areas of the home. A Family Media Agreement — ideally co-created with the child — should specify:

  • which apps and websites are approved;
  • screen time limits by day of week, and approved hours to be online;
  • password rules (parents know all account passwords; children do not share passwords with friends);
  • privacy settings requirements (all accounts private; no location sharing);
  • contact policies (only communicate online with people known in real life);
  • an open-door device policy (devices may be spot-checked for safety, not punishment);
  • and a clear expectation for reporting problems.

Recognizing Warning Signs

The following behavioral changes may indicate a child is experiencing grooming, predatory contact, cyberbullying, or exploitation. Many of these behaviors can also reflect typical adolescent development; it is unexplained combinations or sudden changes that warrant attention and gentle conversation.

Potential signs of online grooming or predatory contact: sudden withdrawal, moodiness, or irritability after being online; becoming secretive about online activities or closing screens quickly when adults approach; receiving unexplained gifts, money, gift cards, or in-game currency from contacts the child is reluctant to discuss; communicating with older contacts on platforms like Discord or Roblox; switching apps or asking that chats be deleted; using new apps or anonymous platforms that weren't previously part of their routine; developing intense interest in a new online contact; or using language or demonstrating knowledge of sexual content inappropriate for their age.

Potential signs of cyberbullying: stopping device use without explanation; appearing nervous or distressed after receiving messages; reluctance to attend school or go outside; unexplained headaches, stomachaches, or changes in sleep and appetite; loss of interest in previously enjoyed activities; sudden withdrawal from family and friends; or making statements about suicide or self-harm — which should always be treated as an immediate concern warranting professional attention.

Parental Monitoring Tools

Parental monitoring applications can provide meaningful protection, particularly when combined with open communication. No tool is a complete solution, and all have limitations.

Bark is best suited for tweens and teens ages 10 and older. It uses AI to scan texts, emails, and more than 30 social media platforms (Instagram, Snapchat, TikTok, Discord, and others) for signs of cyberbullying, self-harm, sexual content, drug references, and stranger contact. Unlike comprehensive monitoring apps, it sends targeted alerts only when something concerning is detected rather than showing parents every message — preserving some teen privacy while flagging serious risks. It offers basic scheduling and web filtering. Pricing is approximately $14/month or $99/year.

Qustodio is strongest for younger children ages 5–16. It provides comprehensive daily device management: screen time limits with custom schedules, per-app time limits and app blocking, categorized web filtering, real-time location tracking and history, and detailed usage reports. It has limited social media monitoring capability compared to Bark. Premium plans start around $54.95/year for one device.

Net Nanny focuses on web filtering and safe browsing, making it most appropriate for younger children ages 4–12. It scans pages as they load and blocks inappropriate content in real time, enforces SafeSearch, and allows custom keyword filtering. It does not monitor social media or messaging platforms. Plans start around $39/year.

Google Family Link is free and built into Android devices and Google Chromebooks. It allows parents to set screen time limits, block or approve apps, control content filters on Chrome and YouTube, approve app downloads, and track location. It is required for children under 13 with a Google account. Most effective through age 13.

Apple Screen Time is free and built into all Apple devices. It provides downtime schedules, per-app time limits, content and privacy restrictions, communication limits (controlling who can contact the child), and screen time reports. Effective at all ages; most powerful for younger children. Best used alongside ongoing conversations for teens.

Platform-Specific Safety Settings

Instagram. Accounts for users under 16 are set to private by default, with unknown users over 18 blocked from sending direct messages. Parents can set up Family Supervision through Instagram's Family Center (Settings → Family Center), which allows daily time limits, content filters, and monitoring of who the teen follows and who follows them. Limitation: teens can create new accounts with false birthdates to bypass protections.

TikTok. Family Pairing (Profile → Settings → Family Pairing) allows parents to set daily screen time limits, create Time Away blocks, see the teen's following and follower list, enable Restricted Mode, and control Direct Messages. As of March 2025, parents can also see accounts the teen has blocked and be alerted when they report content.

YouTube. YouTube Kids is designed for children under 13 with curated content, no comments, and no live streams. For older children, Supervised Accounts can be set up through Google Family Link, with content levels from age-appropriate (Explore, for ages 9+) to broader (Most of YouTube). As of January 2026, parents can set Shorts feed time limits — including to zero — and create supervised accounts without requiring the child to have a Google account.

Roblox. Parents can manage chat settings, content maturity ratings, friend request permissions, monthly Robux spending caps, and account restrictions through the Parental Controls section in Settings. Activity monitoring provides real-time visibility into playtime, purchases, and friend requests. Setting the birth year correctly on the child's account is essential, as controls are automatically applied based on age.

Fortnite / Epic Games. Children under 13 who indicate their age receive Cabined Accounts with no voice or text chat, no real-money purchases, and no social media linking until parental consent is provided. Parental Controls (epicgames.com → Settings → Parental Controls) allow customization of social settings, spending limits, content rating restrictions, and playtime reminders.

Discord. Discord's Family Center requires the teen's consent and involves both accounts linking via QR code. Parents receive weekly activity summaries — friends added, servers joined, messaging frequency — but cannot see message content. Separately, on the child's account, parents should set Safe Direct Messaging to "Keep me safe" (Privacy and Safety settings), turn off "Allow Direct Messages from Server Members," and restrict Friend Requests. Both parent and teen can disconnect the Family Center link at any time.

Snapchat. Snapchat's Family Center requires a parent account and mutual friending before setup. Once linked, parents can see the teen's friends list and recent contacts, set content restrictions, disable access to Snap's My AI chatbot, and report concerning accounts on the teen's behalf. Parents cannot see message content. Snapchat does not allow parents to set time limits remotely — use Apple Screen Time or Google Family Link for that.

Evidence-Based Strategies That Work

A 2026 systematic review and meta-analysis of 11 digital safety studies published in JMIR Pediatrics and Parenting found that parental digital safety interventions significantly improved parents' digital safety knowledge and skills, and were associated with meaningful reductions in children's screen time. The combination of communication-focused approaches with monitoring tools consistently outperformed monitoring alone.

The AAP summarizes the research: "Monitoring that grants children increasing opportunities for autonomy or independence over their decisions and behaviors has more positive outcomes than controlling, restrictive media monitoring, particularly as youth get older." Among high-frequency social media users with high parental monitoring, 25% rated their mental health as poor or very poor; among high-frequency users with low parental monitoring, that figure rose to 60%. Among high-frequency users with strong parental relationships, only 2% reported suicidal thoughts or self-harm — compared to 22% among those with poor parental relationships.

Digital literacy education also matters. A study in Heliyon found that children with higher digital literacy demonstrated better self-regulation online, and that parental mediation was more effective when combined with digital literacy education than either approach alone. Children who understand how algorithms work, how grooming operates, and why predators offer gifts are meaningfully better equipped to recognize and resist those tactics. These conversations don't require technical expertise — they require honesty, consistency, and a home environment in which children believe they can ask for help.

How to Report Online Exploitation

If a child is in immediate danger, call 911. For online exploitation and abuse, multiple reporting channels exist and should be used in combination. Before reporting, take screenshots of messages, profiles, photos, and any relevant content — do not delete communications, as evidence is critical to investigations.

  • NCMEC CyberTipline — The nation's centralized reporting system for online child sexual exploitation, operated by the National Center for Missing & Exploited Children. Report at report.cybertip.org or call 1-800-843-5678, available 24 hours a day, 7 days a week. Reports are reviewed and forwarded to the appropriate law enforcement agency. What can be reported: online enticement, child sexual abuse material (CSAM), child sex trafficking, sextortion of a minor, online grooming, and unsolicited obscene materials sent to a child.
  • FBI Internet Crime Complaint Center (IC3) — Report internet-based crimes at ic3.gov. IC3 notes that crimes against children should be filed with NCMEC; IC3 is most appropriate for crimes overlapping with cybercrime more broadly, such as financial extortion involving a minor.
  • DHS Know2Protect Tipline — The Department of Homeland Security's child exploitation tipline. Call 1-833-591-KNOW (5669). All reports are reviewed and forwarded to appropriate law enforcement. More information at know2protect.gov.
  • Platform reporting tools — Every major social media platform has a built-in mechanism to report harmful content. Take screenshots first, then use the "…" (three dots), share icon, or flag icon near the content to access the Report feature. Select the appropriate category (sexual exploitation, bullying, harassment) and include a description. Follow up if content is not removed promptly.
  • NCMEC Take It Down — A free service for minors (or adults victimized as minors) whose explicit images have been shared or threatened online. Visit takeitdown.ncmec.org. The tool generates a unique digital hash (fingerprint) of the image without uploading it — see the sketch after this list — and the hash is shared with participating platforms, which can use it to scan for and remove matching content. Under the TAKE IT DOWN Act (signed into law in 2025), participating platforms must remove such content within 48 hours of a verified request. The service is anonymous and free.
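The privacy property behind Take It Down (the image never leaves the device) comes down to hashing locally. The sketch below uses an ordinary SHA-256 file digest to show the idea; the real service computes its hashes in the browser and uses matching technology suited to near-duplicate images, so treat the function name and filename here as illustrative assumptions.

    # Illustrative only: compute the fingerprint locally so that the
    # short digest, never the image itself, is what gets submitted.
    import hashlib

    def local_fingerprint(path):
        # Stream the file from disk into a SHA-256 digest. The digest
        # identifies the exact file but cannot be reversed to
        # reconstruct the image.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Only this hex string is shared with participating platforms,
    # which can match it against content on their services.
    print(local_fingerprint("photo.jpg"))  # hypothetical filename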

Legal Protections and Emerging Legislation

Federal and state law provides a framework for protecting children online, though significant gaps remain — and the pace of legislative development has accelerated in recent years in response to growing public concern.

COPPA (Children's Online Privacy Protection Act), enacted in 1998, requires websites and apps directed to children under 13 to obtain verifiable parental consent before collecting personal data, provide clear privacy policies, and allow parents to review or delete their child's data. In January 2025, the FTC approved significant updates to the COPPA Rule for the first time since 2013, including new requirements for consent around targeted advertising and mandatory data minimization.

COPPA 2.0, reintroduced in 2025 by Senators Markey and Cassidy, would extend COPPA protections to children under 17, ban targeted advertising to minors, create an "eraser button" for data deletion, and establish a Youth Marketing and Privacy Division at the FTC. The Senate passed a combined COPPA 2.0/KOSA bill 91–3 in July 2024, but it did not advance in the House before the 118th Congress ended. It was reintroduced in the 119th Congress in 2025 and remains pending.

The Kids Online Safety Act (KOSA), first introduced in 2022 and reintroduced in 2025, would require covered platforms to prevent and mitigate specific harms to known minors — including threats of violence, sexual exploitation, and features that cause compulsive usage. It would require annual independent audits and allow state attorneys general to bring civil actions for violations. KOSA has not yet been enacted at the federal level.

The TAKE IT DOWN Act was signed into law in May 2025. It criminalizes the creation of nonconsensual deepfakes at the federal level, with penalties of up to 2.5 years in prison for sextortion of a minor, and requires platforms to remove non-consensual intimate imagery — including AI-generated content — within 48 hours of a verified request. As of 2025, 45 U.S. states have enacted their own laws addressing intimate AI deepfakes, many specifically addressing minors. AI-generated child sexual abuse material is already illegal under existing federal law prohibiting obscene content depicting minors, regardless of whether it is computer-generated.

At the state level, all 50 states have some form of anti-bullying law, many including cyberbullying provisions. Multiple states — including Connecticut, Texas, New York, and Maryland — enacted children's data privacy laws in 2023–2024. The Internet Crimes Against Children (ICAC) Task Force Program, funded by the Department of Justice at $39.9 million in FY 2024, operates a national network of 61 coordinated task forces representing more than 5,400 federal, state, and local law enforcement agencies. In FY 2024, ICAC task forces conducted approximately 203,467 investigations, leading to more than 12,600 arrests.

Organizations and Resources for Families

  • National Center for Missing & Exploited Children (NCMEC) — Operates the CyberTipline, the Take It Down service, the NetSmartz online safety education program, and Team HOPE peer support for families affected by exploitation. missingkids.org
  • Thorn — Builds technology to detect and remove CSAM; provides free parent resources and current research on online child exploitation. thorn.org
  • Common Sense Media — Provides age-by-age media reviews, a K–12 Digital Citizenship Curriculum, and family guides on apps, games, and social media. commonsensemedia.org
  • ConnectSafely — Publishes platform-specific parent guides and privacy guides for safe use of social media and apps. connectsafely.org
  • Internet Crimes Against Children (ICAC) Task Force Program — Find your state's task force to connect with local law enforcement resources. icactaskforce.org
  • StopBullying.gov — Federal government resource with guides on cyberbullying prevention, recognition, and reporting. stopbullying.gov
  • RAINN (Rape, Abuse & Incest National Network) — Provides resources on reporting tech-enabled sexual abuse; operates the National Sexual Assault Hotline at 1-800-656-HOPE (4673). rainn.org
  • Family Online Safety Institute (FOSI) — International nonprofit working on balanced online safety policy; operates Good Digital Parenting resources for families. fosi.org
  • iKeepSafe — Provides digital citizenship resources for educators and parents, including the "Be Internet Awesome" program in partnership with Google. ikeepsafe.org
  • NetSmartz (NCMEC) — Age-appropriate videos, activities, and lessons for children, parents, and educators on online safety. missingkids.org/netsmartz
  • DHS Know2Protect — Department of Homeland Security campaign against online child sexual exploitation, with tipline and educational resources. know2protect.gov
  • Childhelp — Operates the National Child Abuse Hotline (1-800-422-4453), staffed 24/7 by professional crisis counselors in up to 240 languages. childhelp.org
  • Child Welfare Information Gateway — Connects families to local support services. Phone: 1-800-394-3366. childwelfare.gov
  • Internet Safety 101 (Enough Is Enough) — Video-based curriculum and guides for parents, teachers, and guardians. internetsafety101.org

If You Are Concerned About a Child

If you are concerned that a child may be experiencing abuse or neglect, please do not wait. You do not need proof — you only need reasonable concern. Reports made in good faith are protected by law.

  • If a child is in immediate danger, call 911.
  • Maryland Child Protective Services Hotline: 1-800-91PREVENT (1-800-917-7383), available 24 hours a day, 7 days a week.
  • Outside of Maryland — Childhelp National Child Abuse Hotline: Call or text 1-800-422-4453 (1-800-4-A-CHILD), available 24 hours a day, 7 days a week, in up to 240 languages.
  • 988 Suicide and Crisis Lifeline: Call or text 988 if a child or anyone in your life is expressing thoughts of suicide or self-harm.

Reporting abuse can protect a child. Remember, you do not need to be certain that abuse is occurring — if you have concerns, reach out. Trained professionals will assess the situation and take appropriate steps.

If you are a survivor of childhood abuse and are struggling with its effects, support is available. Healing is possible, and you deserve to access it.

Sources and Resources