One of the big changes over the past few years is that servers now ship with much larger hard drives and faster processors, which has led to far more data being captured for analysis. Where SQL falls short in data warehousing is in making sense of that captured data and turning it into information; that gap is what's driving the move to study 'big data'. There's little reason a company couldn't keep running SQL the way it always has, then export the data it wants to analyze into a NoSQL tool and dig for nuggets of gold.
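That export-then-analyze workflow can be sketched with Python's standard library alone: pull rows out of a relational store and flatten them into the self-describing JSON documents a document database (MongoDB, for instance) would ingest. This is a minimal illustration, not a production pipeline; the `sales` table and its columns are hypothetical stand-ins.

```python
import json
import sqlite3

# Hypothetical relational source: an in-memory SQLite table standing in
# for a production SQL warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("east", "widget", 120.0), ("west", "widget", 80.0), ("east", "gadget", 45.5)],
)

# Export: turn each row into a column-name -> value document.
cursor = conn.execute("SELECT region, product, amount FROM sales")
columns = [desc[0] for desc in cursor.description]
documents = [dict(zip(columns, row)) for row in cursor]

# Each document is now schema-free JSON, ready to load into a document
# store or analyze outside the relational schema.
print(json.dumps(documents[0]))
```

The point of the intermediate JSON step is that the documents carry their own field names, so the downstream tool doesn't need the original table schema.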
Of course it's not the end of the line for SQL-type databases. Even though new analytics workloads call for databases optimized in different ways than traditional SQL engines, SQL is so widely used that it will probably still be running on servers in 10 years. If you look at the databases that have fallen out of favor over the past couple of decades (dBase, FoxPro, 4D, etc.), they were all proprietary and tied to a single main vendor. Since DB2, Sybase, Oracle, and Microsoft all have versions of their own SQL-style database, that won't be the case for SQL. And even the new competitors to SQL have to contend with the popularity of open-source versions like PostgreSQL and MySQL: it's much easier to find developers and DBAs with SQL skills than people trained on these other databases, which have a far smaller installed user base.