Cloud computing is a vision that is increasingly turning into reality for many companies. Enterprises, both small and big, are evaluating cloud computing and, in increasing numbers, are moving their IT infrastructure to the cloud. As a matter of fact, 91% of IT organizations allocate at least some portion of their budget to the cloud as they continue to focus on service responsiveness and cost flexibility. Depth of deployment has also increased, with more than 33% of companies allocating at least 6% of spending to cloud solutions – up from just 23% in 2014. Forrester Research predicts that, by the year 2020, enterprises will be investing more than $241 billion in cloud computing each year – that’s six times what they’re spending today. But where does that leave our current investments in information management solutions, our existing relational data stores, data warehouses, business intelligence systems, and business applications that consume data? What impact will the cloud have on the world of connecting your data sources?
The benefits of cloud computing range from lower data center costs, to significantly reduced environmental impact, to the ability to capture more of the opportunities that markets present through increased agility in resource deployment and dramatically reduced time to market. While the promised benefits of cloud computing can be immense, achieving them requires much more than simply connecting via an adapter to a software-as-a-service (SaaS) offering. These may be useful steps toward a cloud computing blueprint, but on their own they do not deliver cloud computing for the whole enterprise and its associated benefits. When embarking on a cloud journey, organizations face multiple challenges in data movement and access. Some of the key ones are:
- How to move data in legacy systems to the cloud without interrupting business operations
- How to integrate multiple SaaS apps, each with different standards, with analytical or transactional systems
- How to ensure timely data movement between cloud and on-premises systems to support business requirements
- How to ensure data quality when moving data between on-premises systems and the cloud
Cloud application integration is growing quickly in importance. Industry experts often cite integration as one of the barriers to adoption of cloud services, especially for apps that need to exchange messages and data. Computerworld’s 2014 survey of IT professionals with public and private cloud experience revealed an 11% increase in interest in technologies for cloud integration over the previous year. These organizations want to avoid rigid point-to-point connections between cloud-based services and on-premises infrastructure, which often ignore well-established integration principles. Custom interfaces among cloud apps are difficult to maintain and tricky to upgrade when endpoints change.
When it comes to enterprise applications and systems of record, synchronization between on-premises and cloud-based apps must be orchestrated in a consistent way. Customers need a simple yet powerful cloud integration platform that includes out-of-the-box adapters for ERP as well as for cloud apps like Workday, NetSuite, and Salesforce.
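To make the adapter idea concrete, here is a minimal sketch in Python of the pattern such a platform typically relies on: each endpoint is wrapped in an adapter that exposes a uniform interface, and the platform synchronizes through that interface rather than through point-to-point custom code. The class and field names here are hypothetical illustrations, not any vendor’s actual API:

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Uniform interface that hides each endpoint's native API."""

    @abstractmethod
    def fetch(self) -> list:
        """Pull records from the endpoint in a canonical dict format."""

    @abstractmethod
    def push(self, records: list) -> int:
        """Write canonical records to the endpoint; return count written."""

class OnPremERPAdapter(CloudAdapter):
    """Hypothetical stand-in for an on-premises ERP connection."""
    def __init__(self):
        self.rows = [{"id": 1, "customer": "Acme"}, {"id": 2, "customer": "Globex"}]

    def fetch(self):
        return list(self.rows)

    def push(self, records):
        self.rows.extend(records)
        return len(records)

class SaaSCRMAdapter(CloudAdapter):
    """Hypothetical stand-in for a SaaS CRM such as Salesforce."""
    def __init__(self):
        self.records = []

    def fetch(self):
        return list(self.records)

    def push(self, records):
        self.records.extend(records)
        return len(records)

def synchronize(source: CloudAdapter, target: CloudAdapter) -> int:
    """Copy records the target is missing. The platform owns the mapping,
    so an endpoint can change without breaking every custom interface."""
    existing = {r["id"] for r in target.fetch()}
    new = [r for r in source.fetch() if r["id"] not in existing]
    return target.push(new)
```

Because both endpoints speak the same canonical interface, adding another SaaS app means writing one new adapter rather than one custom interface per pair of connected systems.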
Changing Patterns of Information and Integration
Integration is changing around the evolving needs of information processing within the enterprise. For instance, what used to be well-structured data in applications, data warehouses, and other more traditional systems has given way to larger unstructured and structured data stores that may exist inside or outside of the enterprise’s firewall.
These changes, or evolutions, are apparent in terms of their place in enterprise IT. There is no going back to simpler times when the data was structured, took up much less space, and was loaded on traditional servers within the enterprise data center. Those days are long over. Older approaches to data integration, as well as older technology, can no longer provide the value they once did.
IDC predicts that the total volume of enterprise data will grow at a rate of 50% each year, reaching around 40 zettabytes by 2020 (1 zettabyte equals 1 billion terabytes). Another important fact about this gigantic amount of data is that 90% of it will be unstructured.
The clear reality today is that mass data storage is something that enterprises must manage, along with integration services. This is something that most traditional data integration technologies are ill-prepared to do. At issue is the sheer volume of the data, and thus the ability to effectively process it.
Traditional approaches to application and data integration focused on a simple extraction of data, followed by a transformation of its structure so the data appears native to the target. This process was not designed for the massive amount of data that is presently stored and managed. Indeed, this approach to integration won’t be able to keep up with the volume of data that needs to be brought together to obtain the value of the information.
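As a rough illustration of that classic pattern, the sketch below shows a row-at-a-time extract-transform-load pipeline in Python. The field names and the transformation are hypothetical; the point is that every record passes sequentially through the transform step, which is precisely what becomes a bottleneck at today’s data volumes:

```python
from typing import Iterable, Iterator

def extract(source: Iterable) -> Iterator:
    """Pull raw records from the source system, one at a time."""
    for row in source:
        yield row

def transform(rows: Iterable) -> Iterator:
    """Reshape each record so it appears native to the target schema
    (hypothetical mapping: rename fields, normalize the name)."""
    for row in rows:
        yield {
            "customer_id": row["id"],
            "customer_name": row["name"].strip().title(),
        }

def load(rows: Iterable, target: list) -> int:
    """Write transformed records to the target store; return count loaded."""
    count = 0
    for row in rows:
        target.append(row)
        count += 1
    return count

# One record at a time: simple and correct, but throughput is bound by
# this single sequential pipeline -- the limitation described above.
legacy_rows = [{"id": 7, "name": "  acme corp "}]
warehouse = []
loaded = load(transform(extract(legacy_rows)), warehouse)
```

At a few thousand rows this design is perfectly adequate; at the petabyte scale described above, the serial extract-transform-load loop itself becomes the constraint, which is why newer integration architectures distribute and parallelize these steps.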
It’s not a matter of “if” we’re moving in new directions that will challenge your existing approaches to data integration, it’s “when.” Those who think they can sit on the sidelines and wait for their data integration technology provider to create the solutions they require will be very disappointed. Indeed, the likely case is that your legacy data integration provider may not have viable technology to take you into the next generation, and may thus join the world of dead technologies, as enterprise IT progresses too fast for it to keep up.