What constitutes an excellent film?

For the past century or so, cinema has been one of society's most frequently consumed forms of entertainment, with a plethora of films popping onto the scene each year with the fervour of popcorn kernels in a hot pan.
The ever-expanding range of cinematic options has made choosing an evening's entertainment ever more perplexing, with many viewers relying on recommendations to help them decide. Such recommendations might originate from user-generated websites, critics' reviews, or systems that blend the two.

But does the esteemed film critic's perspective reflect that of the average moviegoer? Can we rely on the critic's opinion to guide our entertainment choices?

The purpose of this article is to examine the similarities and differences in the movie rating behaviour of film critics and audience members, using data from Rotten Tomatoes, a website that has provided film and television reviews since 1998.

The website displays two distinct average ratings (on a percentage scale) for each movie, each representing the share of positive reviews it received: one based on reviews from a limited group of critics, and another produced by the site's users.

This article will apply a number of analytical tools to examine the behaviour of the two groups, using a data set consisting of the average critic rating, the average audience rating, and a range of other features from Rotten Tomatoes.
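To make the discussion concrete, here is a minimal sketch of what such a data set might look like in Python. The column names (`critic_rating`, `audience_rating`, and so on) are hypothetical stand-ins, not the real schema of the Rotten Tomatoes data.

```python
import pandas as pd

# Toy stand-in for the data set described above; all names and
# values are illustrative, not taken from the real file.
movies = pd.DataFrame({
    "title": ["Film A", "Film B", "Film C"],
    "critic_rating": [91, 45, 78],      # % of positive critic reviews
    "audience_rating": [84, 62, 80],    # % of positive user reviews
    "genre": ["Drama", "Comedy", "Documentary"],
    "year": [1994, 2010, 2018],
    "runtime": [142, 98, 110],          # minutes
    "content_rating": ["R", "PG-13", "PG"],
})
print(movies.dtypes)
```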

We will begin by making a broad assessment of the magnitude of the variations in the rating behaviour of the two groups. We will next delve deeper into an examination of the causes of these discrepancies. Finally, we will use a basic linear model to try to discover the primary predictors of each group's rating patterns.

Part I: Examining Distinctions

To draw a broad comparison of the respective rating behaviour of the audience and the critics, consider the distribution of each group's ratings.



Figure 1: Histograms of the audience and critic rating distributions

In the accompanying histogram, we can see distinct patterns in the ratings of the two groups.

Both distributions have a left skew, indicating that the movies in our data set cluster towards the higher (and thus more positive) end of the rating range.
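As a quick sanity check, skewness can be computed directly; a left-skewed (negatively skewed) sample yields a negative value. The ratings below are made-up illustrative numbers, not values from the actual data set.

```python
import pandas as pd

# Illustrative ratings clustered toward the high end, with a long
# left tail: the sample skewness should come out negative.
audience = pd.Series([88, 85, 90, 76, 81, 92, 70, 83, 64, 87, 35])
skewness = audience.skew()
print(f"audience skew: {skewness:.2f}")
```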

However, it is also true that the shapes of the two distributions are noticeably different.

The audience ratings are smoother, with no obvious peaks, and are mostly concentrated in the mid-to-high rating range, with only a handful of films dropping towards the lower end of the spectrum.

The critics' scores are spread far more evenly across the entire range, implying that there are almost as many movies at the bottom as there are at the top. Peaks can also be found at both extremes of the distribution, suggesting movies with only a small number of reviews, for which a score near 0% or 100% is easy to reach.

This suggests that the audience is more liberal with its ratings, while reviewers are, as the name suggests, more critical.

Next, let's see if there's any evident association between the evaluations of the two groups. We can plot audience against critic ratings, with each point representing an individual movie.



Figure 2: Scatter plot of the relationship between audience and critic ratings

The scatter plot above shows some indication of a positive correlation between the two groups' scores, implying that a film rated highly by the audience will tend to receive a positive rating from the critics as well.
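The strength of such a relationship is usually summarised by the Pearson correlation coefficient. The sketch below uses made-up ratings for a handful of films; on the real data it would be computed on the two rating columns of the data set.

```python
import numpy as np

# Hypothetical paired ratings for a handful of films.
critic   = np.array([90, 40, 75, 60, 85, 30])
audience = np.array([70, 65, 80, 50, 60, 55])

# Pearson correlation between the two groups' scores:
# positive but well below 1, i.e. a moderate association.
r = np.corrcoef(critic, audience)[0, 1]
print(f"r = {r:.2f}")
```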

The correlation shown in the plot, however, is far from strong, with a large number of movies falling a significant distance away from the central line.

Both of the preceding figures demonstrate that there is enough of a difference in the ratings of the two groups for us to delve deeper into the data.

With that in mind, let's do some further digging to see what might be causing these differences in behaviour between the two groups.

Part II: Examining the Causes of Audience/Critical Disparities

Taking the first analysis above into account, let's try to identify the drivers of the audience's and critics' differing behaviour by analysing additional features of the data set.

When considering what factors can influence a viewer's enjoyment of a film, one that comes to mind is its genre. Let's examine the average rating that each of the two groups gave to each genre of movie in our data set.



Figure 3: A bar chart comparing the ratings of audiences and critics across genres

The graph above shows how critics and audiences reacted differently to various movie genres.

Documentary and Classic films received higher marks from reviewers, while Faith & Spirituality and Kids & Family films received higher marks from audiences.

From a theoretical standpoint, this makes sense. One could argue that the film world's "Classics" meet the required criteria to be deemed of great cinematic quality, despite being less accessible to the more casual moviegoer.

"Family friendly" films, on the other hand, may delight audiences looking for a simple movie to watch with their children, without employing the cinematic techniques required to earn the accolade of movie brilliance.

However, there were some genres to which both groups responded similarly. Musicals and dramas, for example, appear to be uniformly average in popularity among audiences and reviewers alike.
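A per-genre comparison like the one in Figure 3 boils down to a grouped average. A minimal sketch on toy data, with hypothetical column names:

```python
import pandas as pd

# Toy data; genres and ratings are illustrative only.
df = pd.DataFrame({
    "genre": ["Documentary", "Documentary", "Kids & Family",
              "Kids & Family", "Drama", "Drama"],
    "critic_rating":   [92, 88, 55, 60, 70, 68],
    "audience_rating": [70, 74, 80, 85, 71, 69],
})

# Average rating per genre for each group, mirroring Figure 3.
by_genre = df.groupby("genre")[["critic_rating", "audience_rating"]].mean()
print(by_genre)
```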

Let's look at the year in which movies were released in a similar way. Could it be that the ratings of the two groups vary in a similar, or perhaps contrasting, way over time?



Figure 4: A line graph comparing the ratings of audiences and critics by release year

The graph above reveals that the average audience rating has remained fairly stable over time, albeit with a slight downward trend. That downward trend is far more pronounced in the behaviour of the critics, whose line exhibits a steep fall over the course of the 100 years covered.
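The per-year lines in Figure 4 can be reproduced with another grouped aggregation, this time also counting films per year. Again, the data and column names below are illustrative only:

```python
import pandas as pd

# Toy data: older films rated higher by critics, and more films
# in the sample for recent years.
df = pd.DataFrame({
    "year": [1950, 1950, 1990, 1990, 2015, 2015, 2015],
    "critic_rating":   [90, 85, 75, 70, 55, 60, 50],
    "audience_rating": [80, 78, 76, 74, 72, 75, 70],
})

# Per-year averages and film counts, as in Figure 4's three lines.
by_year = df.groupby("year").agg(
    critic_mean=("critic_rating", "mean"),
    audience_mean=("audience_rating", "mean"),
    n_films=("critic_rating", "size"),
)
print(by_year)
```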

Does this mean that there are fewer acclaimed films around these days than there were in the early decades of the previous century?

Not necessarily. It's worth noting the third line on the graph, which shows that the number of movies we're analysing shrinks as we move further back in time.

This could be because the compiler of this data set preferred to include more new movies than old ones, or because data for more recent years was more readily available.

However, it is more likely that this is because considerably more movies are made each year today than there were, say, 60 years ago.

With the rise of independent cinema and the widespread adoption of movie streaming services, it is reasonable to infer that the barriers to entry into the film industry have fallen considerably.

This suggests that film distribution has likely become more "diluted" over time, with a greater range of films made each year to cater to a larger audience.

While this does not imply that quality cinema has vanished, an increase in the number of films designed to appease viewers looking for a quick fix of entertainment, rather than to impress critics, inevitably drags down the critics' average rating without significantly harming the general opinion of the audience, and thus potentially explains the behaviour described above.

Part III: Identifying the Factors Influencing Rating Behavior

Let us now proceed to the final section of the analysis, in which we will attempt to uncover the primary determinants of each group's rating behaviour using a simple linear model.

We can use the ages, runtimes, content ratings, and genres of movies as explanatory variables, and the average ratings of critics and audiences as the response variables, in two independent models that we can then compare.
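A minimal sketch of this two-model setup, using ordinary least squares on toy data. The column names and values are hypothetical, and genre is one-hot encoded before fitting:

```python
import numpy as np
import pandas as pd

# Toy data; all names and values are illustrative stand-ins.
df = pd.DataFrame({
    "age": [70, 30, 5, 40, 10, 60],          # years since release
    "runtime": [100, 120, 95, 110, 90, 130],  # minutes
    "genre": ["Drama", "Horror", "Documentary",
              "Drama", "Horror", "Documentary"],
    "critic_rating":   [80, 40, 92, 75, 35, 95],
    "audience_rating": [78, 55, 70, 74, 50, 72],
})

# One-hot encode genre, add an intercept column, and fit one
# least-squares model per response variable.
X = pd.get_dummies(df[["age", "runtime", "genre"]], columns=["genre"])
X.insert(0, "intercept", 1.0)
coefs = {}
for target in ["critic_rating", "audience_rating"]:
    beta, *_ = np.linalg.lstsq(X.to_numpy(dtype=float),
                               df[target].to_numpy(dtype=float),
                               rcond=None)
    coefs[target] = pd.Series(beta, index=X.columns)

# Side-by-side coefficients, as compared in Figure 5.
print(pd.DataFrame(coefs).round(2))
```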


Figure 5: The linear models' coefficients for audience and reviewer ratings

For readers unfamiliar with the principles of linear regression, a variable's coefficient describes the estimated effect of a one-unit change in that variable on the response.

In the context of this model, this indicates that being a Documentary is estimated to boost a film's average critic rating by as much as 27.9 points.

Interpreting the models' results, we can observe that in both cases movie genre had a far bigger influence on each group's rating behaviour than the other factors we included in the model.

Aside from a few outliers, many of the genres had similar effects on both groups' behaviour in terms of both direction and magnitude, with Documentary and Animation films having the most positive influence on both groups' ratings.

There are exceptions, such as Horror, which had a significantly greater negative impact on audience ratings than on critics'. And while Westerns had very little influence on either group, the effects ran in opposite directions: positive for reviewers and negative for the audience.

Consistent with the previous section's graph of average rating by release year, the age variable yielded a positive coefficient for both groups, with a larger effect among critics. Compared with the genre variables, however, the model projected its impact to be small, as was the case with runtime.