Technology, Tests, and the Art of the Essay

Should computers grade essays?

The Common Core is trying to shift the emphasis of education toward more complex forms of thinking. Evaluating more complex thinking, however, requires more complex forms of assessment. I think most people would agree that written essays are better indicators of a student's understanding than multiple-choice, fill-in-the-bubble tests, but they are also more time-consuming and expensive to mark. Multiple-choice tests can be fed into a computer and instantly graded, whereas essays require a teacher or professor or test center professional to read and evaluate them. Or maybe they don't.



Several studies in recent years (like this one, for example) have shown that computers can mark essays with the same accuracy and consistency as humans. In fact, computers are often more consistent than human readers. As states struggle to put together new assessments without breaking their budgets, computerized essay grading holds one obvious attraction: it's much cheaper.

But it is also controversial, and it's not hard to see why. Computers can't really measure creativity or originality. And the values placed on certain features, like longer words and more complex phrases, open up possibilities for gaming the scoring system. For more on this argument, check out this statement from the National Council of Teachers of English (NCTE).
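For the curious, here is a rough sketch of what a surface-feature scorer might look like under the hood, and why it invites gaming. Everything in it, the features, the weights, even the padded essay, is hypothetical and invented for illustration; real scoring engines use far more sophisticated models, but the basic vulnerability is the same.

```python
# A toy surface-feature essay scorer (purely illustrative, not a real engine).

def surface_features(essay: str) -> dict:
    """Extract the kinds of shallow features a scorer might reward."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

# Hypothetical weights: longer essays with longer words and sentences score higher.
WEIGHTS = {"word_count": 0.01, "avg_word_length": 0.5, "avg_sentence_length": 0.05}

def score(essay: str) -> float:
    return sum(WEIGHTS[name] * value for name, value in surface_features(essay).items())

# The gaming problem: padding with long, fancy words raises the score
# without adding a single new idea.
plain = "Cars use gas. Some cars use less gas than others."
padded = plain + (" Consequently, notwithstanding multitudinous considerations,"
                  " the aforementioned phenomenon demonstrates incontrovertible significance.")
print(score(plain), score(padded))  # the padded essay scores higher
```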

What do you think, dear readers? Do you see any problems with computers grading essays? Leave comments below!

Another Slice of PISA: Socioeconomic factors weigh heavily on US students’ test scores

Earlier this week, I wrote about the OECD's standardized test, the PISA (Programme for International Student Assessment), and today I am going to write about it again. Repetition is the key to these standardized tests, you know. In this particular post, I'd like to take a look at a few of the more subtle details from the report the OECD released together with the results.



The first of those details is a big one, because it relates to money. According to the OECD's report, socioeconomic differences account for 15% of the variance in testing outcomes in the US, compared to less than 10% in Finland, Norway, Japan, and Hong Kong. In other words, two students from different socioeconomic backgrounds are more likely to perform differently on the PISA in the US than in most other OECD countries. And, no surprise, it is not the student on the lower end of the financial spectrum who comes out ahead.
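If that "percent of variance" phrasing is fuzzy, here is a small sketch of the statistic behind it: the R-squared you get from regressing test scores on a socioeconomic index. The numbers below are synthetic, cooked up purely to illustrate the calculation; they are not PISA data.

```python
# Illustrating "variance explained" with synthetic data (not PISA data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ses = rng.normal(0, 1, n)        # a made-up socioeconomic status index
noise = rng.normal(0, 1, n)      # everything else that affects scores

# Build scores so that SES explains roughly 15% of the variance:
# variance explained = b**2 / (b**2 + 1), so b = sqrt(0.15 / 0.85).
b = np.sqrt(0.15 / 0.85)
scores = 500 + 100 * (b * ses + noise) / np.sqrt(b**2 + 1)

r = np.corrcoef(ses, scores)[0, 1]
print(f"Share of score variance explained by SES: {r**2:.1%}")  # ~15.0%
```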

Also, the OECD has a special classification for super rad students who are among the lowest 25% socioeconomically but still perform in the top 5% academically. They are called "resilient." In the US, only 5% of low-income students qualify as resilient, compared to an OECD average of 7% and around 15% for Hong Kong, Macao, Shanghai, and Vietnam.

This seems like it should be a strong point for the US (the American dream, social mobility, all that), but as it turns out, it is the opposite. Ironically, unlike many other countries, the US does not show a big difference in student-teacher ratios or teacher education levels at lower-performing schools. So what makes the difference? Facilities, maybe? After-school programs? I'd be very interested to hear any ideas in that big comment box below.

In terms of school performance, some interesting results came up as well. First, across all the OECD countries, schools with more autonomy tended to perform better. That is, the more control the principal had, combined with accountability measures and strong principal-teacher interaction, the better the students did.

At the same time, there was no cross-country evidence that competition among schools contributed in any way to better student performance. In other words, students at schools that compete to enroll more kids performed at the same level as students who were pretty much stuck with their schools.

On the positive side (or not, really), the US did score well in opinions about our own math skills. This comes despite a below-average ranking in math. For example, 69% of respondents felt confident in their ability to calculate figures such as the gas consumption rate of a car. The OECD average was 56%. So, at least we are confident. And have warning lights for low fuel.
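For anyone who wants to back that confidence up, here is the calculation the survey question has in mind, with numbers invented for the example:

```python
# Calculating a car's gas consumption rate (all figures made up).
miles_driven = 320.0   # miles covered since the last fill-up
gallons_used = 10.5    # gallons it took to refill the tank

mpg = miles_driven / gallons_used
liters_per_100km = 235.215 / mpg   # standard US-mpg <-> L/100 km conversion

print(f"{mpg:.1f} mpg, or {liters_per_100km:.1f} L/100 km")  # 30.5 mpg, 7.7 L/100 km
```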

Filling the inspiration gap

How can online learning help to fill the inspiration gap left by our declining education system?

Motivating students to pursue topics that interest them. Shocking fact of the day: students learn much better when they're interested and engaged. Online education allows individualized learning and experimentation in a way that traditional learning cannot. That means students have the ability to learn what they want to learn. Sure, sometimes you have to learn things you don't like, and that's where the online model offers still more advantages…

Providing access to passionate, inspiring teachers. Because a good online learning system can, in theory, cast a worldwide net for teaching talent, students benefit from the ability to interact with the best professionals in their desired field. Passionate teachers inspire passion in students.

Creating a structure where grades and examinations are secondary to real learning. At Rukuku, we believe that grades and exams shouldn't be a purpose in and of themselves. When students take a class on their own initiative rather than being nudged (read: forced) into it for a grade, actual learning becomes the priority.

Another bonus of online learning: none of those inspiration-killing standardized tests!