The number of pieces of content depicting graphic violence that Facebook took action on during the first quarter of this year was up 183% on the previous quarter.

The social network took action on 3.4m posts or parts of posts that contained such content.

The company said most of the increase was the result of improvements in detection technology.

Facebook says that during the period, 0.22-0.27% of views by users were of content that violated its standards on graphic violence.

This was up from 0.16-0.19% in the previous three months.

The company said the increase was likely the result of higher volumes of such content being shared on Facebook, possibly due to an increase in violence in Syria during the period.

The bulk of the posts were found and flagged by the firm before users reported them, a shift driven by improvements in artificial intelligence technology.

The figures are contained in an updated transparency report published by the company, which for the first time contains data around content that breaches Facebook's community standards.

In the area of adult nudity and sexual activity, between 0.07% and 0.09% of views during the first quarter were of content that violated standards.

That is up from 0.06-0.08% during the last three months of 2017.

The number of pieces of nude and sexual content that the company took action on during the period was 21 million, the same as during the final quarter of last year.

The social network estimates that it found and flagged 85% of that content before users saw and reported it, a higher proportion than previously, due to technological advances.

Facebook said the number of views on the platform of terrorist propaganda from organisations including ISIS, al-Qaeda and their affiliates is extremely low.

This, the company said, was because there was little of it in the first place and because most is removed before it is seen.

Around 1.9m pieces of terror-related content had action taken on them during the first three months of the year.

This was up by almost three quarters from 1.1m in the previous quarter, because of improvements in Facebook's ability to find such content using photo-detection technology.

This led to old as well as new content of this type being taken down.

Most of the content was found and flagged before users had a chance to spot it and alert the platform.

About 2.5m pieces of hate speech had action taken on them during the period, an increase of more than half on the previous period, driven by technology improvements.

However, Facebook's ability to find hate speech before users reported it lagged behind other categories, with the company detecting only 38% itself.

"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards, so we tend to find and flag less of it, and rely more on user reports, than with some other violation types," the report says.

Some 837 million pieces of spam were detected and removed during the first quarter, up 15% on the previous period, while 583 million fake accounts were disabled, a reduction of 16%.

Facebook estimates that 3-4% of monthly active users during the last three months of 2017 and the first three months of 2018 were fake.

The company previously enforced community standards by having users report violations, which trained staff then dealt with.

Now, however, artificial intelligence technology does much of that work.

This means the technology can often identify breaches before anyone actually sees them, Facebook says.

It also means content in private groups, which may never be reported by members of the group, can be flagged and dealt with.

The social network says that when action is taken on flagged content, it does not necessarily mean the content has been taken down.