Leaked documents claim that social networking giant Facebook instructs its moderators to remove only certain threats of violence, while allowing some references to child abuse to remain on the site.
Facebook has come under increased pressure in recent months over its influence on almost two billion active users and the control it has over the content that appears on the platform.
According to files published by the Guardian newspaper, Facebook does not automatically delete evidence of non-sexual child abuse in order to help identify and rescue the child involved.
The leaked dossier also claimed comments posted about killing Donald Trump are banned by the social networking site, although violent threats against other people are often allowed to remain.
It shows that examples of "credible violence", such as the phrase "someone shoot Trump", must be removed by staff because he is a head of state.
However, generic posts stating someone should die are permitted as they are not regarded as credible threats, the Guardian reported.
Facebook will also allow people to live-stream attempts to self-harm because it "doesn't want to censor or punish people in distress", it added.
Facebook has previously come under fire for allegedly failing to remove sexualised pictures of children from its website. The BBC said it had used Facebook's "report button" to flag 100 such photos, but 82 of them were not removed.
Monika Bickert, head of global policy management at Facebook, said: "Keeping people on Facebook safe is the most important thing we do.
"Founder Mark Zuckerberg recently announced that over the next year, we'll be adding 3,000 people to our community operations team around the world - on top of the 4,500 we have today - to review the millions of reports we get every week, and improve the process for doing it quickly.
"In addition to investing in more people, we're also building better tools to keep our community safe.
"We're going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help."
Last month, a man in Ohio killed an elderly man and then posted a video of the murder on Facebook.
The company said at the time that it would review how it monitors violent footage and other objectionable material in response to the killing.
The shooting video was visible on Facebook for nearly two hours before it was reported, the company said.
Also in April, a man in Thailand filmed himself on Facebook Live killing his 11-month-old daughter and then taking his own life.
The footage was accessible on the site for around 24 hours after the deaths.
Facebook chief Mark Zuckerberg has vowed to work to keep the social network from being used to propagate harrowing acts like murder and suicide.