Cloud Adoption: How Streaming Integration Minimizes Risks

 

 

Last week, we hosted a live webinar, Cloud Adoption: How Streaming Integration Minimizes Risks. In just 35 minutes, we discussed how to eliminate database downtime and minimize other risks of cloud migration and ongoing integration for hybrid cloud architecture, including a live demo of Striim’s solution.

Our first speaker, Steve Wilkes, started the presentation by discussing the importance of cloud adoption for today’s pandemic-impacted, fragile business environment. He continued with the common risks of cloud data migration and how streaming data integration with low-impact change data capture minimizes both downtime and risk. Our second presenter, Edward Bell, gave us a live demonstration of Striim for zero-downtime data migration. In this blog post, you can find my short recap of the key areas of the presentation. This summary certainly cannot do justice to the comprehensive discussion we had at the webinar, which is why I highly recommend you watch the full webinar on-demand to access details on the solution architecture, its comparison to the batch ETL approach, customer examples, the live demonstration, and the interactive Q&A section.

Cloud adoption brings multiple challenges and risks that prevent many businesses from modernizing their business-critical systems.

Limited cloud adoption and modernization reduce the ability to optimize business operations. The challenges and risks include downtime, business disruption, and data loss during the migration, none of which is acceptable for critical business systems. The risk list, however, is longer than these two. Switching over to the cloud without adequate testing, which leads to failures, working with stale data in the cloud, and data security and privacy are also among the key concerns.

Steve emphasized the point that “rushing the testing of the new environment to reduce the downtime, if you cannot continually feed data, can also lead to failures down the line or problems with the application.” Later, he added that “Beyond the migration, how do you continually feed the system? Especially in integration use cases where you are maintaining the data where it was and also delivering somewhere else, you need to continuously refresh the data to prevent staleness.”

Each of the risks mentioned above is preventable with the right approach to data movement between the legacy and new cloud systems.

 

Streaming data integration plays a critical role in successful cloud adoption with minimized risks.

A reliable, secure, and scalable streaming data integration architecture with low-impact change data capture enables zero database downtime and zero data loss during data migration. Because the source system is not interrupted, you can test the new cloud system as long as you need before the switchover. You also have the option to fail back to the legacy system after the switchover by reversing the data flow and keeping the old system up-to-date with the cloud system until you are fully confident that the new environment is stable.

Striim: Change Data Capture and Initial Load

Striim’s cloud data migration solution uses this modern approach. During the bulk load, Striim’s CDC component collects the source database changes in real time. As soon as the initial load is complete, Striim applies those changes to the target environment to maintain consistency between the legacy and cloud databases. With built-in exactly-once processing (E1P), Striim avoids both data loss and duplicates. You can also use Striim’s real-time dashboards to monitor the data flow and various detailed performance metrics.
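
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the general bulk-load-plus-CDC pattern described above. It is not Striim’s implementation or API; the change structure, the position-based cutoff, and the checkpoint are assumptions used only to illustrate how applying captured changes after the initial load can avoid both data loss and duplicates.

```python
# Minimal, self-contained toy of the bulk-load-plus-CDC pattern (not Striim's API).
# Each change carries a position (e.g., a log sequence number); anything at or
# below the snapshot position or the last applied checkpoint is skipped, which
# is the essence of avoiding both data loss and duplicates.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Change:
    position: int            # commit position in the source log (assumed)
    key: str
    value: Optional[str]     # None represents a delete

def apply_changes(target: dict, changes, snapshot_position: int, checkpoint: int) -> int:
    """Apply captured changes in commit order; return the new checkpoint."""
    for change in sorted(changes, key=lambda c: c.position):
        if change.position <= max(snapshot_position, checkpoint):
            continue                       # already reflected: no duplicates
        if change.value is None:
            target.pop(change.key, None)   # delete
        else:
            target[change.key] = change.value
        checkpoint = change.position       # persist this in practice for recovery
    return checkpoint

# The snapshot copied rows up to position 100; CDC buffered later changes.
target = {"a": "1", "b": "2"}              # state right after the initial load
buffered = [Change(99, "a", "stale"), Change(101, "b", "3"), Change(102, "c", "4")]
print(apply_changes(target, buffered, snapshot_position=100, checkpoint=0), target)
```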

Continuous, streaming data integration for hybrid cloud architecture liberates your data for modernization and business transformation.

Cloud adoption and streaming integration are not limited to the lifting and shifting of your systems to the cloud. Ongoing integration post-migration is a crucial part of planning your cloud adoption. You cannot restrict it to database sources and database targets in the cloud, either. Your data lives in various systems and needs to be shared with different endpoints, such as your storage, data lake, or messaging systems in the cloud environment. Without enabling comprehensive and timely data flow from your enterprise systems to the cloud, what you can achieve in the cloud will be very limited.

“It is all about liberating your data.” Steve added in this part of the presentation. “Making it useful for the purpose you need it for. Continuous delivery in the correct format from a variety of sources relies on being able to filter that data, transform it, and possibly aggregate, join and enrich before you deliver to where needed. All of these can be done in Striim with a SQL-based language.”
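
As a rough illustration of what such continuous, in-flight processing does to each event, here is a conceptual Python sketch. It is deliberately not Striim’s SQL-based language; the event fields and the reference data are invented for illustration only.

```python
# Conceptual sketch of per-event filtering, transformation, and enrichment.
# Field names and the reference table are assumptions; a continuous query in a
# SQL-based language expresses the same steps declaratively.
products = {"p1": {"product_name": "Smart Watch", "list_price": 199.0}}  # reference data

def process(events):
    """Yield filtered, transformed, enriched events one at a time."""
    for e in events:                                    # events arrive continuously
        if e["action"] != "purchase":                   # filter
            continue
        out = {"user": e["user"].lower(),               # transform / normalize
               "amount": round(float(e["amount"]), 2)}
        out.update(products.get(e["product_id"], {}))   # enrich with reference data
        yield out

stream = [{"action": "purchase", "user": "Ada", "amount": "199.00", "product_id": "p1"},
          {"action": "browse", "user": "Bob", "amount": "0", "product_id": "p1"}]
print(list(process(stream)))
```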

A key point both Edward and Steve made is that Striim is very flexible. You can ingest from multiple sources and deliver to multiple targets. True data liberation and a modernized data infrastructure require that flexibility.

Striim also provides deployment flexibility. In fact, one of the Q&A questions asked about deployment options and pricing; unfortunately, we could not answer all the questions we received. The short answer is: Striim can be deployed in the cloud, on-premises, or both in a hybrid topology. It is priced based on the CPUs of the servers where the Striim platform is installed, so you don’t need to worry about the sizes of your source and target systems.

There is much more covered in this short webinar we hosted on cloud adoption. I invite you to watch it on-demand at your convenience. If you would like to get a customized demo for cloud adoption or other streaming data integration use cases, please feel free to reach out.

Mitigating Data Migration and Integration Risks for Hybrid Cloud Architecture

 

Cloud computing has transformed how businesses use technology and drive innovation for improved outcomes. However, the journey to the cloud, which includes data migration from legacy systems and integration of cloud solutions with existing systems, is not a trivial task. There are multiple cloud adoption risks that businesses need to mitigate to achieve the cloud’s full potential.

 

Common Risks in Data Migration and Integration to Cloud Environments

In addition to data security and privacy, there are additional concerns and risks in cloud migration and integration. These include:

Downtime: The bulk data loading technique, which takes a snapshot of the source database, requires you to lock the legacy database to preserve a consistent state. This translates to downtime and business disruption for your end users. While this disruption can be acceptable for some of your business systems, the mission-critical ones that need modernization are typically the ones that cannot tolerate even planned downtime. And sometimes planned downtime extends beyond the expected duration, turning into unplanned downtime with detrimental effects on your business.

Data loss: Some data migration tools might lose or corrupt data in transit because of a process failure or network outage. Or they may fail to apply the data to the target system in the right transactional order. As a result, your cloud database ends up diverging from the legacy system, also negatively impacting your business operations.

Inadequate Testing: Many migration projects operate under tight time pressure to minimize downtime, which can lead to a rushed testing phase. When the new environment is not tested thoroughly, the end result can be an unstable cloud environment. That is certainly not the desired outcome when your goal is to take your business systems to the next level.

Stale Data: Many migration solutions focus on the “lift and shift” of existing systems to the cloud. While it is a critical part of cloud adoption, your journey does not end there. Having a reliable and secure data integration solution that keeps your cloud systems up-to-date with existing data sources is critical to maintaining your hybrid cloud or multi-cloud architecture. Working with outdated technologies can lead to stale data in the cloud and create delays, errors, and other inefficiencies for your operational workloads.

 

Upcoming Webinar on the Role of Streaming Data Integration for Data Migration and Integration to Cloud

Streaming data integration is a new approach to data integration that addresses the multifaceted challenges of cloud adoption. By combining bulk loading with real-time change data capture technologies, it minimizes the downtime and risks mentioned above and enables reliable and continuous data flow after the migration.

Striim - Data Migration to Cloud

In our next live, interactive webinar, we dive into this particular topic: Cloud Adoption: How Streaming Data Integration Minimizes Risks. Our co-founder and CTO, Steve Wilkes, will present practical ways you can mitigate data migration risks and handle integration challenges for cloud environments. Striim’s solution architect, Edward Bell, will walk you through a live demo of zero-downtime data migration and continuous streaming integration to major cloud platforms, such as AWS, Azure, and Google Cloud.

I hope you can join this live, practical presentation on Thursday, May 7th, at 10:00 AM PT / 1:00 PM ET to learn more about how to:

  • Reduce migration downtime and data loss risks, as well as allow unlimited testing time of the new cloud environment.
  • Set up streaming data pipelines in just minutes to reliably support operational workloads in the cloud.
  • Handle strict security, reliability, and scalability requirements of your mission-critical systems with an enterprise-grade streaming data integration platform.

Until we see you at the webinar, and afterward, please feel free to reach out for a customized Striim demo of data migration and integration to the cloud that supports your specific IT environment.

 

Top 4 Highlights from Our Streaming Data and Analytics Webinar with GigaOm

 

 

On April 9, 2020, Striim’s co-founder and CTO Steve Wilkes joined GigaOm analyst Andrew Brust in an interview-style webinar on “Streaming Data: The Nexus of Cloud Modernized Analytics.” Over the course of the hour, the two talked about the evolution of data integration needs, what defines streaming data integration, capturing transactional data through change data capture (CDC), comparative approaches to data integration, where companies typically start with streaming data, use case examples, how streaming data supports cloud initiatives, providing a foundation for operational intelligence, and even its role in AI/ML advancements.

While we can’t cover it all in one blog post, here is a “top 4” list of our favorite things highlighted during the webinar. We also invite you to view the entire event on-demand by watching it online.

 

#1: “Today, People Expect to Have Up-to-the-Second Information” — Steve Wilkes

Andrew asked Steve to do a bit of a “wayback machine” exercise to trace how we arrived at the need for streaming, real-time data. “Twenty years ago, most data was created by humans working on applications with data stored in databases, and you’d use ETL to move and store the data in batches into a data warehouse. It was OK to see data hours or even days later, and everyone did that,” said Steve. But fast-forward to our daily lives today, where we get immediate updates on things like Twitter feeds, news alerts, and instant messages with friends, and expectations have changed.

“So the business world needs to work the same way, and this does drive competitive pressures,” he continued. “If you’re not having this view into your operations and what your customers need, someone else will and they can push you out of business.”

Related to this, Andrew said later in the webinar: “We have new modes of thinking. But using older modes of technology, we’re going to run into issues.”

GigaOm: Old vs New Approaches to Data Movement

#2: Cloud Adoption Driving the Need for Streaming Data

As Steve noted, there’s been a significant shift from all on-premises systems to cloud-based environments, but there is still the need to get data into the cloud in order to make use of it.

Steve shared with Andrew that what Striim sees across its global customer base is that the majority first focus on building the ability to stream their data and then use it to power analytics.

“Initial use cases are often zero-downtime data migrations to cloud or feeding a cloud-based data warehouse…. Once they’ve stream-enabled a lot of their sources, they will start to think about what analytics they can promote to real time and where they can get value out of that,” said Steve.

 

#3: A Range of Business Use Cases

Throughout the webinar, Andrew mentioned a few possible use cases, particularly in the context of the global pandemic being faced. “There’s nothing more frustrating, especially in these times of lockdown, when it says something is in stock and then you go to confirm the purchase and it says it’s out of stock … or you find out later.”

From Steve: “That real immediacy into what customers are doing, need, and want is key to what streaming data can do.”

Another example Andrew used illustrated the need for operational intelligence using real-time data. He referenced his home state of New York as it faces the coronavirus pandemic, where the real-time sharing of data about medical supplies and personnel across the state’s hospitals could improve decisions on how to best allocate and redistribute those assets.

Shifting to the analytics side, Steve described operational intelligence as being able to change what you know about your operations and the decisions you make, based on current information. He gave the example of being able to track down critical devices, such as wheelchairs, in settings such as airports and hospitals.

The two also discussed how streaming data fits with AI/ML, where Steve commented how streaming data can be used to get data ready and processed for AI models to improve efficiency and performance.

 

#4: Status of Streaming Data

Andrew polled attendees with the question of where they are today with having streaming data in their organization.

GigaOm Poll: Use of Streaming Data In Your Organization

At least half of the attendees said they are using streaming data at least occasionally, which suggests that streaming data integration will continue to grow in popularity and ubiquity. Another 25% are currently evaluating streaming data technology.

Andrew asked Steve for his thoughts on the 15% who felt they don’t have a need for streaming data. As Steve commented: “A lot of organizations have a perception of what a real-time application is and the categories of use cases they are good for. But if you are moving applications to the cloud and they are business-critical, if you can’t turn them off for a few days, how do you do that without turning it off when data is still changing. There’s a need for real-time streaming data there.”

As you can see, the two covered a lot of ground — and so much more during this interactive webinar event. It is available to watch on demand at your convenience, so please check it out. We thank GigaOm and Andrew Brust for hosting this engaging program.

Also, you can learn more about the topic of streaming integration in a new 100+ page book published by O’Reilly Media and co-authored by Steve Wilkes, one of the speakers in this webinar. Download your free PDF copy today.

 

Streaming Data: The Nexus of Cloud-Modernized Analytics

 

 

On April 9th, I will be having a conversation with Andrew Brust of GigaOm about the role of streaming integration in digital transformation initiatives, especially cloud modernization and real-time analytics. The format of this webinar is light on PowerPoint and rich in lively discussion and interaction, so we hope you can join us.

Streaming Data: The Nexus of Cloud-Modernized Analytics

April 9, 2020 - 10:00 AM PDT / 1:00 PM EDT

Digital transformation is the integration of digital technology into all areas of a business, resulting in fundamental changes to how businesses operate and how they deliver value to customers. Cloud has been the number one driving technology in the majority of such transformations. It could be that you have a cloud-first strategy, with all new applications being built in the cloud, or you may need to migrate online databases without taking downtime. You may want to take advantage of cloud scale for infinite data storage, coupled with machine learning to gain new insights and make proactive decisions.

In all cases, the key component is data. The data for your new applications, cloud analytics, or data migration could originate on-premises, in another cloud, or be generated by millions of IoT devices. It is essential that this data can be collected, processed, and delivered rapidly, reliably, and at scale. This is why streaming data is a major component of data modernization, and why streaming integration platforms are vital to the success of digital transformation initiatives.

In a modern data architecture, the goal is to harvest your existing data sources and enable your analysts and data scientists to provide value in the form of applications, visualizations, and alerts to your decision makers, customers, and partners.

In this webinar we will discuss the key aspects of this architecture, including the role of change data capture (CDC) and IoT technologies in data collection, options for data processing, and the differing requirements for data delivery. You will also learn how streaming integration platforms can be utilized for cloud modernization, large-scale streaming analytics, and machine learning operationalization, in a reliable and scalable way.

I hope you can join us on April 9th, and see why streaming integration is the engine of data modernization for digital transformation.

 

Take Apache Kafka to the Next Level

Apache Kafka has become the de facto standard for providing a unified, high-throughput, low-latency platform for handling real-time data feeds.  With a Kafka-based infrastructure in place, it’s time to take Kafka to the next level. Join us on Thursday, March 15, at 11am PT or 2pm ET for a half-hour discussion on the top 5 reasons to up the ante on your Kafka implementation.

Reason #1: Continuous Ingestion into Kafka

While Apache Kafka is gaining great traction as a fault-tolerant, high-speed open source messaging solution, the value users gain is limited to the data sources they can feed into Kafka in a streaming fashion.

Join this 30-minute webinar to see how easy it can be to use pre-built wizards to ingest and integrate data into Kafka from a wide variety of sources (a brief sketch of the underlying pattern follows the list), including:

  • Databases via non-intrusive change data capture (e.g., Oracle, MS SQL, MySQL, HPE NonStop, MariaDB)
  • Files (e.g., log files, system files, batch files)
  • Big Data (e.g., HDFS, HBase, Hive)
  • Messaging (e.g., Kafka, Flume, JMS, AMQP)
  • Cloud (e.g., Amazon S3, AWS RDS, Salesforce)
  • Network
  • Sensors and IoT Devices
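
To make the ingestion idea concrete, here is a brief, hypothetical sketch of pushing change events onto a Kafka topic as they are captured, using the open-source kafka-python client. The broker address, topic name, and event shape are assumptions for illustration; Striim’s wizards generate equivalent pipelines without hand-written code.

```python
# Hypothetical sketch of continuous ingestion into Kafka (kafka-python client).
# Broker address, topic name, and event shape are assumed for illustration.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                 # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_change(table, operation, row):
    """Push one change event onto a Kafka topic as soon as it is captured."""
    event = {"table": table, "op": operation, "row": row, "ts": time.time()}
    producer.send("demo-change-events", value=event)    # assumed topic name

publish_change("orders", "INSERT", {"id": 42, "status": "NEW"})
producer.flush()                                        # ensure delivery before exit
```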

Reason #2: SQL-Based Stream Processing on Kafka

Over the past 6 months, a lot has been written about how the future of Apache Kafka will incorporate stream processing.

The future is now. Striim has been providing enterprise-grade, SQL-based stream processing for Kafka for 2 years in large and business-critical environments worldwide.

Join this webinar to learn how easy it can be to provide SQL-based stream processing for Kafka, as well as for other data pipelines, with capabilities that include the following (see the sketch after this list):

  • Filtering
  • Masking
  • Aggregation
  • Transformations
  • Enrichment with dynamically changing data
  • Time Windowing
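
For a feel of what two of these capabilities amount to, masking and time windowing with aggregation, here is a small, self-contained Python sketch. It is conceptual only, not Striim’s SQL-based syntax, and the event shape is invented.

```python
# Conceptual sketch of masking plus tumbling-window aggregation (not Striim's SQL).
from collections import defaultdict

WINDOW_SECONDS = 60

def mask_card(number):
    """Masking: keep only the last four digits."""
    return "*" * (len(number) - 4) + number[-4:]

def windowed_totals(events):
    """events: iterable of (timestamp_seconds, card_number, amount) tuples."""
    totals = defaultdict(float)
    for ts, card, amount in events:
        bucket = int(ts // WINDOW_SECONDS) * WINDOW_SECONDS   # window start time
        totals[(bucket, mask_card(card))] += amount           # aggregate per window
    return dict(totals)

events = [(0, "4111111111111111", 25.0),
          (30, "4111111111111111", 10.0),
          (70, "5500000000000004", 5.0)]
print(windowed_totals(events))    # totals keyed by (window start, masked card)
```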

Reason #3: Making Apache Kafka Applications Enterprise-Grade

Chances are, you are using Apache Kafka for messaging, but not for building analytics applications. Why? If you’re like many companies, the open source stream processing solutions around Kafka simply don’t hold up in production.

Striim’s patented technology is widely used for enterprise-grade, SQL-based stream processing for Kafka, and delivers complex streaming integration and analytics for one of the largest Kafka implementations in the world.

Join this 30-minute webinar to learn how you can build next-generation Kafka-based analytics applications with built-in HA, scalability, recovery, failover, security and exactly-once processing guarantees.

Reason #4: Kafka Integration via Streaming

Apache Kafka is neither a source nor a destination. To truly leverage Kafka, it is necessary to integrate both from streaming data sources into Kafka and from Kafka to a variety of targets, in a streaming fashion in real time.

Striim’s patented technology can continuously collect data from a wide variety of sources (such as enterprise databases via log-based change data capture, log files, and sensors) and move that data in real time onto a Kafka queue. In addition, Striim can continuously collect data from Kafka and move it in real time to a broad range of big data and database targets, on-premises or in the cloud.
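
As a sketch of the Kafka-to-target direction, the snippet below consumes a topic with the open-source kafka-python client and writes each record into SQLite as a stand-in database target. The topic name, broker address, and record shape are assumptions, and this is not Striim’s implementation, just the underlying pattern of continuous delivery.

```python
# Hypothetical sketch of continuous delivery from Kafka to a database target.
import json
import sqlite3
from kafka import KafkaConsumer

db = sqlite3.connect("events.db")
db.execute("CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, payload TEXT)")

consumer = KafkaConsumer(
    "demo-events",                                      # assumed topic name
    bootstrap_servers="localhost:9092",                 # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:                                # runs continuously
    event = message.value                               # assumed to carry an "id" field
    db.execute("INSERT OR REPLACE INTO events VALUES (?, ?)",
               (str(event["id"]), json.dumps(event)))
    db.commit()                                         # one commit per record, for simplicity
```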

Join this 30-minute webinar to learn how you can easily integrate your Kafka solution with a broad range of streaming data sources and enterprise targets.

Reason #5: Kafka Visualization via Real-Time Dashboards

Sometimes, all you need is visibility.

With Striim, you can continuously visualize data flowing in a Kafka environment via real-time, push-based dashboards. To get even more value, you can easily perform SQL-based processing of the data in-motion and provide the best possible views into your streaming data.

Join us to learn how easy it can be to incorporate stream processing and real-time visualizations into your Kafka solutions.

Take Kafka to the Next Level
Streaming Integration and SQL-Based Stream Processing for Apache Kafka Environments
March 15, 2018
11:00 – 11:30am PT / 2:00 – 2:30pm ET
Register Today!

Taking Financial Services Operations to the Next Level – Live Webinar

We are excited to announce our upcoming live webinar, presented by Striim co-founder and CTO, Steve Wilkes.

Live Webinar: Taking Financial Services Operations to the Next Level
Using Streaming Analytics

Thursday, October 26, 2017
10am PDT/1pm EDT/5pm UTC
Register

Today’s financial services institutions face many operational challenges that range from strict government regulations and macro-economic conditions, to the rising demands of customers, to increasingly complex processes. To stay ahead of the game, the financial services industry is consistently driven to improve its operations using its data assets and the latest cutting-edge technologies.

These technologies are often required to store, process and analyze years of highly classified, transactional data from multiple sources. Furthermore, the data is also required to be readily accessible, in order to deliver real-time insights into customer experience and mitigate potential risks.

Given these requirements, streaming integration and analytics is one of the key new technologies that has transformed the way banks and insurance companies make smarter operational decisions with their high-volume, high-velocity data. By using streaming integration and analytics, these companies can make real-time decisions on time-sensitive issues, detect and monitor potentially anomalous activity in their systems, and deliver best-in-class service and offerings to their customers in a timely manner.

In this live webinar, Steve will present several real-world case studies that utilize streaming analytics to illustrate how companies can outsmart risks, ensure regulatory compliance, and seize opportunities to improve customer experience.

Live Webinar: Taking Financial Services Operations to the Next Level Using Streaming Analytics

Thursday, October 26, 2017
10am PDT/1pm EDT/5pm UTC
Register

Steve will present real-life examples of popular banking and insurance use cases including:

    • Fraud, money laundering and other financial crime detection and prevention
    • Predictive ATM maintenance
    • Global transaction monitoring
    • Proactive customer service and real-time customer offers

Join us at this live webcast to learn how your peers at banking and insurance companies are using streaming analytics solutions to make faster, better, automated decisions.

Sign up today!

The Best of Both Worlds: Hybrid Open Source Data Management Platforms

Please join our upcoming webinar with Tony Baer from Ovum where we will discuss the pros and cons of a “hybrid open source” strategy vs. an “open source first” strategy.

In the Big Data era, open source technologies have seen increased adoption, with an enormous degree of impact on the entire technology ecosystem. For example, many programming frameworks, including Hadoop, and certain NoSQL databases are available as open source.

Together, these technologies cover a large portion of both compute engines and algorithms. While many enterprises that support open source projects continue to use vendor-specific technology solutions, an “open source first” strategy has also been adopted by many others.

Webinar: The Real Costs and Benefits of Open Source Data Platforms

Wednesday, September 13, 2017
11am PDT/2pm EDT/6pm UTC
Register

Though not the only reason, cost savings is among the top reasons for the increased adoption of open source. Historically, open source has been thought of as “free” software, as the source code is readily available and can be downloaded from a variety of different sources. But oftentimes, this “free” software has significant costs associated with it in the form of support, integration, updates, and maintenance.

All in all, is an “open source first” strategy more cost-effective than hybrid open source strategies? How do enterprises know where to draw the line? Are certain technologies better suited for open source compared to others? And, why do we increasingly see a hybrid approach that blends open source with proprietary software as the new norm for enterprises?

To weigh in on this discussion, Tony Baer, Principal Analyst, Information Management for Ovum, and Steve Wilkes, CTO of Striim, will be taking a closer look at the true costs and benefits of using open source data platforms in an upcoming webinar on September 13.

They will first examine how a hybrid approach avoids reinventing the wheel for commodity technologies and focuses on delivering unique IP. They will also discuss how a hybrid approach brings together the best of both worlds for enterprises that want both cost savings and enterprise-grade solutions.

Don’t miss out on your chance to participate in this discussion with Tony and Steve, and to ask your questions live.

Additionally, all webinar attendees will receive a complimentary copy of the new Ovum whitepaper – The Real Costs and Benefits of Open Source Data Platforms – written by Tony Baer.

Register today!

 

The Critical Role of a “Streaming First” Data Architecture – Webinar

Please join us for our upcoming webinar on The Critical Role of a “Streaming First” Data Architecture in 2017, presented by our co-founder and CTO, Steve Wilkes.

IDC’s recent prediction that the world will be creating 163 zettabytes (or 163 trillion gigabytes) of data a year by 2025 was shocking. What’s more astounding is how little of that data can be stored, with some estimates as low as 5%.

In order to handle this data deluge, companies must begin the transition now to a “streaming first” data architecture, moving as much data processing and analytics as possible in-memory to reduce the amount of data they need to store and to gain insights at the speed of their business.

Webinar: The Critical Role of a “Streaming First” Data Architecture in 2017

Wednesday, August 30

Watch On-Demand

In this 30-minute live webinar discussion, Steve will:

  • Discuss the critical role of a “Streaming First” data architecture for real-time insights and immediate actions
  • Examine the strategies for seamlessly transitioning to a “streaming first” data architecture without disruption
  • Introduce the Striim data platform and provide an overview of its key capabilities
  • Share real-world case studies of how streaming analytics is being used by Fortune 500 companies to make smart and timely decisions

We hope you can join us!

Data Security with Streaming Analytics: Webinar Featuring Guest Analyst

We are pleased to announce our upcoming webinar on Data Security with guest speaker Stephanie Balaouras, Forrester Vice President and Research Director serving Security and Risk professionals.

All webinar attendees will receive a complimentary copy of Forrester’s December 2016 Data Security Benchmark Report: Understand the State of Data Security and Privacy: 2016 To 2017.

For years, security leaders have played a game of escalation with attackers by countering their techniques with a growing arsenal of data security products and services. However, expanded security budgets and deepening defenses have still not kept intruders out of the network, or stopped malicious insiders. Legacy, siloed rules-based technologies are unable to keep pace, delivering a deluge of alerts to be validated, or potentially ignored due to the overwhelming volume.

Detecting and defending against cyberattacks and fraud requires fast analysis of large, diverse data sets. Today, security pros are turning to streaming analytics platforms to correlate data in real time, and recognize behavior patterns that could indicate malicious activity. With streaming analytics, security teams reap benefits with higher detection rates, fewer false positives, and rapid response.

Webinar: Counteract Cyberattacks and Fraud with Streaming Analytics
Wednesday, December 14
11 a.m. PST / 2 p.m. EST / 7 p.m. GMT

Join guest speaker Stephanie Balaouras, Forrester Vice President and Research Director serving Security and Risk professionals, and Steve Wilkes, co-founder and CTO of Striim, to:

  • Discuss why today’s security challenges are really an analytics challenge
  • Describe how streaming analytics solutions are combating external attacks, malicious insiders, and fraud
  • Distill the benefits of streaming analytics solutions to security and other initiatives in the organization

Real-world case studies of how streaming analytics is being used to prevent fraud and cyberattacks will also be shared.

Forrester Data Security Benchmark Report

All webinar attendees will receive a complimentary copy of Forrester’s December 2016 Data Security Benchmark Report: Understand the State of Data Security and Privacy: 2016 To 2017.

Based on two recent security-related surveys by Forrester, this benchmark report provides current insight into the most common ways company data is breached, what types of data are breached, and what forms of attack are on the rise.


Striim Weekly CTO Webinar with Steve Wilkes

In addition to having some of the best streaming technology out there, Striim also has a reputation for being extremely available and responsive to both customers and potential customers. In that vein, Striim’s founder and CTO, Steve Wilkes, makes himself available every Wednesday for a live weekly CTO webinar.

Here is a brief overview of what the weekly webinar covers, including a Live Demo of the Striim platform:

  • Company overview, genesis, founding, funding
  • Product purpose, industry uptake, common use cases
  • What can one do with the platform
  • Data collection, change data capture, sources, targets, data lakes, windows, aggregates
  • Data processing, data enrichment, event patterns, data correlation
  • Analyze results, predictive analytics, and visualization with dashboards
  • Key differentiators
  • Live demo
  • Q&A with Steve Wilkes

From initial data collection through processing, delivery, analysis, alerting, and visualization, we invite you to speak 1:1 with Steve to learn more about how we can help you with your streaming architecture.



Complete video transcript of a 2016-04 CTO weekly webinar with time stamps:

0:00 good morning, thank you all for joining. I'm Steve Wilkes and I'm going to take about half an hour of
0:04 your time to go through a brief overview of the Striim platform and a quick demo to
0:09 show you all the functionality
0:13 Striim was founded in 2012, and the four of us who founded the company came
0:19 out of GoldenGate Software. GoldenGate was, and still is, the number one
0:26 technology for moving data between databases, transactional data, using
0:31 Change Data Capture. When we were talking to customers at GoldenGate, they
0:35 pretty often asked us: you are good at moving data between databases, can we
0:39 look at that data while it's moving, can we analyze it? That was kind of the genesis
0:44 of the company, when we set out to build an end-to-end platform that could collect
0:49 high-speed streaming data, process it, analyze it, deliver it somewhere else, and then
0:57 visualize that data and alert off it. Now, four years later, we are fulfilling that
1:02 mission, and that is our vision. We are backed by leading investors like Summit Partners and
1:09 Intel Capital, and very recently Atlantic Bridge came in and extended our
1:16 Series B round
1:19 the technology is in a mature form; we are currently on version 3.2.4 of the
1:25 platform, with more releases coming this year. Most of our customers are in finance,
1:32 telco, retail, and gaming, and we have some customers interested in other
1:38 things; it's pretty varied and across the board
1:44 the Striim platform provides streaming integration and intelligence, and this
1:50 enables you to do streaming analytics. We emphasize streaming integration
1:55 because we recognize that you cannot even start to analyze data that
2:00 you can't get to. Streaming integration is all about being able to
2:04 collect data in a high-speed fashion as that data is being produced, and all of this
2:10 enables you to make it useful as soon as it's created
2:16 the goal of the product is to provide an end-to-end solution to make it easy for
2:23 people to do real-time data collection and streaming integration of high-speed
2:28 data, and then to be able to build on that:
2:31 be able to aggregate data, correlate it, analyze it, and then visualize and
2:38 report off it, but be able to do all of this with an enterprise-grade product that has built-
2:45 in reliability, scalability, and security, all the things you expect from enterprise
2:51 products
2:56 so
2:58 there are a lot of different use cases to think of when it
3:03 comes to data: you may be building a data lake, you may have requirements to
3:08 provide data to external parties or internal customers as a service on
3:13 demand, you might be looking at handling huge amounts of IoT data, or monitoring your
3:20 infrastructure or equipment, or even doing database replication in real
3:26 time
3:27 to understand what's really going on. You may have a mandate to improve
3:31 customer experience by understanding what your customers want before they know it, or
3:36 ensuring that you meet customer SLAs, or looking for things like fraud or other
3:42 types of unusual behavior that can be damaging to your customer experience or
3:46 damaging to your company. So we truly believe that, whatever type of use case you are
3:53 addressing in the data space, when you're creating an overall enterprise data
3:58 strategy you need to think streaming first, and streaming integration has to
4:03 be part of your overall data strategy and overall data infrastructure, because if
4:08 you do things in a batch fashion you can't move to real time later; if you
4:14 do everything in a streaming fashion then you are in a favorable position to start handling
4:19 some of these more real-time types of applications in the future
4:26 the platform, as I mentioned, is a full end-to-end platform
4:31 you can think of it as handling all the plumbing; we handle the difficult
4:37 parts of the application that you want to build, so all of the enterprise-grade
4:41 stuff, the scalability, the security, the reliability, that's all handled by the
4:46 platform
4:48 in addition, obviously, we provide all the functionality that enables you to collect
4:54 data as it's being produced in a streaming fashion, to process that data
4:58 in a whole variety of different ways, to deliver it to other systems, to
5:03 manipulate that data and to enrich it with context, to visualize it, and to
5:10 do all of that through a very simple declarative interface, through a drag-
5:16 and-drop UI, and to enable you to define all of the processing within your data
5:22 pipelines in a SQL-like language. The reason we chose a SQL-like language
5:29 specifically is that we found that most developers, whether they are Java or Python or C++
5:37 shops or whatever, also understand SQL, as do business analysts and
5:42 data scientists, so SQL is kind of the common language that enables
5:47 people to understand the processing and to build their processing quickly
5:52 the simplest thing that you can do with our platform is move data at high speed
5:57 from one place to another, and that starts with collecting data, collecting it in
6:03 real time as it's being produced and turning it into data streams. Some things
6:08 are obviously streaming: if you think about message queues, JMS, Kafka, or
6:14 Flume, they do continuous push production of data, that's naturally streaming. If you
6:22 think of things like log files, if you're moving log files in batches, waiting for files
6:28 to finish before you ship them, that's not streaming, and depending on the
6:32 granularity of the file you are now minutes or an hour behind, so
6:38 for things like log files you need to have parallel collection of data as it is
6:43 being written, which means you need to read to the end of the log, and as new data
6:47 is being written you start streaming it through. Things like databases, not
6:51 many people think of those as streaming at all
6:54 but in reality the only real way of getting data from enterprise OLTP systems in a
7:02 reasonable time frame is to use a streaming technology, change data
7:06 capture, which looks at all of the activity happening in the database, all
7:11 of the inserts, updates, and deletes happening in a database, and captures those from
7:16 the transaction logs as they occur and turns them into a change stream. That
7:22 change stream can then be treated in real time as a real-time view of what's
7:28 happening in the database, and in reality CDC is the only way to get
7:33 this type of data from production databases. Most DBAs won't allow you, for
7:37 example, to run large full table scans or queries against a whole table on
7:43 a production database; they'll say to create a read-only instance or an
7:49 operational data store or some other way of getting data from that database, and typically the
7:54 mechanism for creating that read-only instance or operational data store uses
7:58 change data capture. So we have built-in Change Data Capture for Oracle, for MS SQL, for MySQL,
8:06 and for HP NonStop systems, and you get all of that change data as change streams
8:13 when you think about sensors, they can deliver data in a whole variety of
8:18 different ways, through various ports, TCP, UDP, HTTP, and various
8:23 protocols, and
8:26 the real goal is to enable some degree of edge processing. You don't necessarily
8:31 want to collect all the data from sensors; you want to be able to do some
8:35 processing at the edge to reduce the amount of data that you're sending
8:39 into your core data center, and you may want to reduce redundancy, and you may
8:45 want to also look for particular patterns at the edge where things are
8:49 happening, and be able to act on them very quickly
8:53 so independent of whatever you're collecting, what you end up
8:57 with are data streams, and these data streams can then, using the platform, be
9:03 delivered to other things: they can be delivered into Hadoop, NoSQL technologies,
9:08 they can be delivered into the cloud, into message queues like Kafka, again
9:13 JMS, and into databases and data warehouses, and that happens from collection of
9:21 the data through to delivery of the data typically in milliseconds
9:27 but most customers need to do additional work. We do have some
9:32 customers just doing Change Data Capture from Oracle, for example, and delivering
9:38 that into Kafka or pushing it into Hadoop and doing all of that in milliseconds;
9:43 customers will then typically build some additional monitoring applications,
9:47 maybe some alerting and thresholds, on that simple data flow. That's the core of
9:54 the use cases: to move data in real time to ensure that their data lakes are
9:58 up-to-date or that people are getting up-to-date information on Kafka
10:02 but typically you need to do additional work, additional processing, on that data
10:07 so there are a number of different types of basic operations you do on this
10:10 streaming data, and in our platform all of this is done through continuous in-
10:15 memory queries, so the data flows through queries you've defined,
10:21 continually producing results. There's no notion of jobs or batches or anything like
10:25 that; it's all just happening continually. So with continuous queries written in a
10:30 SQL-like language you can do filtering, you can do
10:35 data transformation; if you want to do aggregation you need to add in a construct
10:41 called a window, and that turns the unlimited, unbounded stream into a
10:47 manageable set, maybe the last five minutes' worth of data or the last
10:52 hundred records or a combination of both
10:54 that allows you to do things like calculate aggregates over that moving
10:58 window, like the last five-minute moving average for each of these values, and
11:04 then compare those values, do statistical analysis, or just create aggregates and
11:08 then store them somewhere rather than storing the raw data
11:12 that's really where windows come in; you need those to create useful aggregates
11:17 the final thing you do is to enrich it, and
11:22 virtually every one of our customers is doing some degree of enrichment of the
11:27 original data. Think about it this way: the source data may not have
11:32 all of the context necessary to make decisions, either within the platform
11:38 or if you land that data into NoSQL or into Kafka or into Hadoop. You have to
11:46 ask the question: is the data that I delivered there going to have enough context to
11:50 enable me to ask questions of it or run queries? And if the answer is no,
11:55 which it typically is, then you are probably going to have to do some enrichment of that
12:00 data
12:02 and so an example of that would be that we have a customer collecting call
12:07 detail records in the telco space. These call detail records have a
12:12 subscriber identifier in them. On the raw call detail records you can look at the
12:18 overall network and see what's happening, you can set thresholds and alerts if there are
12:23 more than a certain number of dropped calls, for example, at a certain location, and you
12:28 can deal with it, but this customer wants to be able to look at the network from a
12:34 customer perspective. Each customer has their own set of expectations of the network,
12:40 the different plans they may be paying for and different SLAs for reliability, number
12:47 of dropped calls, network speed, etcetera
12:50 by looking at the basic network you can't see that, but if you in real time
12:56 join those raw data records with customer information,
13:02 so enrich the data as it is flowing through, now you start to look at the network
13:07 from a customer perspective. So that is typically where enrichment comes in: to
13:12 add additional context data, or maybe to enrich with the results of some modelling you've done,
13:20 your customers' propensity to do things, in order to help you make
13:24 predictions. All of this enrichment happens in real time at very high speed; we
13:30 specifically architected the platform so that it scales to manage the enrichment process
13:38 in a really efficient way, by bringing the
13:41 events being processed to the data, which is the only real way of enabling
13:46 this high throughput while doing the enrichment
13:51 once you have streaming integration in place, once you're collecting data and
13:55 you're essentially delivering it somewhere or processing it, then you start
14:01 to think about doing things with more intelligence: start to look for patterns
14:06 of events over time, for example, which is something from the complex event
14:11 processing world, where you are looking for sequences of events over one or more sources
14:16 within certain time frames that may indicate something interesting, and you
14:22 can also look for outliers, things that look like anomalies or something
14:26 unusual, and you can do correlation of data where you're matching data from one
14:32 stream with another stream or matching it with some external
14:36 context. We do have a customer, for example, that is correlating across
14:44 multiple different types of log files that they have, because each log file
14:50 may contain certain occurrences of a problem, but it's when those problems
14:57 occur at the same time or within a set time frame that there is a really big issue, and
15:02 you can really only do that kind of correlation in real time as it's
15:08 actually happening, because it's really hard to do after the fact with moving
15:12 time windows across multiple files
15:17 when you have the results of any of this processing you can write them to any
15:22 of the external systems we support, but you can also store them in a results
15:27 store. We have our own internal results store; by default it's backed by
15:32 Elasticsearch, it scales as our platform scales over a cluster, and everything
15:37 that goes in there is pre-indexed, which means results come back really quickly, and
15:42 then you can further analyze these results, feed those results back into
15:46 further processing, and use those results for things like predictive analytics
15:52 on top of all of that we do have the ability to build dashboards and
15:56 real-time visualizations of data which enable you to see what's happening in
16:02 real time, and everything from initial collection of data, processing, delivery,
16:07 analysis, alerting, and visualizations is all streaming, so everything is being pushed
16:14 all the way through to the dashboards in real time
16:20 so Striim has been designed to not require any additional software beyond a JVM (Java Virtual
16:29 Machine), so it runs on commodity hardware and scales really well as part of a cluster,
16:34 but it has been designed to integrate really well with other things that you
16:40 may have in your infrastructure. So while it doesn't require Hadoop to run, it integrates
16:45 really well with it; it doesn't require an external Kafka system to run, but it integrates
16:51 really well with Kafka and Flume and some of the other technologies you may have
16:55 Striim really plays its part for real-time applications and a real-time
17:00 view into what's happening; real-time analysis works really well alongside
17:05 maybe big data and long-term analytics applications, or even your legacy
17:09 applications that may be running on an ODS or enterprise data warehouse
17:14 and it's a two-way conversation: we can deliver things into these places, into
17:19 Hadoop, into the ODS, and we can also use the information in Hadoop or the ODS for
17:24 context which we can use to enrich and enhance the whole real-time
17:28 application
17:31 the things that really make this different are that we are an end-to-end
17:35 platform that enables this real-time streaming integration on which you
17:41 can then build analytics, and we have this non-intrusive capability of
17:47 capturing relational change using Change Data Capture, as I mentioned, for Oracle,
17:54 MySQL, MS SQL, and HP NonStop systems; it's unusual in the
18:00 streaming space to be able to do this
18:02 Change Data Capture is something that we do really, really well, so if all you want
18:06 to do is ingest data from the database in real time into Hadoop or Kafka, you
18:12 can use our platform for that, and if you do use our platform for that then you are well
18:16 set up to start thinking about maybe doing streaming analytics and
18:21 looking for issues and thresholds and other types of interesting patterns in
18:28 real time later
18:31 if you want to do things like multi-stream correlation or enrichment of streaming data,
18:35 there is support in our platform to do this really well
18:39 the other key thing that we provide is that all of the processing in our platform is done through
18:46 this SQL-like language, so you don't need to have Java developers or other
18:51 types of coders to enable you to build these applications. That means you
18:56 can rapidly build applications that solve your business problems without
19:01 having to do any coding; you just build your applications in our platform and the
19:07 platform takes care of the rest for you
19:09 and having the built-in visualizations as well really enhances the
19:16 experience and the ability to build full end-to-end applications, and of course it
19:21 goes without saying that if you're looking for a platform that can do all
19:24 of these things you need something that is enterprise-strength and enterprise-scale,
19:29 secure, and able to leverage the low-cost computing you may have, whether it's
19:34 actual servers or virtual machines or cloud hosting solutions

<< PRODUCT DEMO >>

19:40 so with that, I'll go into a brief demo of the product
19:48 the platform works utilizing these applications, which are data flows, and the
19:54 data flows are doing all the back-end processing of the raw data. If we look at
19:59 a simple application,
20:02 you see it's like a directed graph flow of data from the top to the bottom
20:09 you can build these applications using the UI, so you can drag and drop,
20:15 for example, the data source into the UI, configure it, say it is reading from files, say how
20:20 you want to parse the data, Apache access log format for the web logs, save this, and
20:26 edit the component
20:30 If you don't like using UIs, or you find it faster to build things using text,
20:37 we do have a full scripting language
20:42 that enables you to build applications just by writing a script, and this
20:48 looks like a combination of DDL and SQL. This is the exact representation
20:52 of the data flow you see here; it's a two-way street, you can build applications in the
20:58 UI, save them as text, modify them, load them back into the UI. The text files are also very
21:04 useful for source control and for moving from dev to test to production, etc.
21:12 so if you look at this data flow you can see we started with the source and we have
21:17 some queries and intermediate data streams. So if we look at a simple
21:21 query,
21:23 this is a query that's going to be running for every record read from
21:27 the data source. The source in this case is some web logs; the web logs
21:31 represent every action happening on a website for a company that sells wearable
21:39 products
21:40 so each log entry is something that a user has done, and you can see here we
21:46 capture the IP address, we are doing all this manipulation of the incoming data to put it
21:52 into the right data types, or do some date parsing, or look for regular expressions in the
21:57 incoming data. Under certain circumstances we may have
22:02 a product ID in the URL people have clicked on, people are searching for things,
22:08 etc., and so all of this data is being read from the source and then it's going
22:15 into a data stream
22:16 this is a prepared data stream; it now has field names to make it easier to
22:21 query. In a customer application, typically what we see is that customers
22:29 will build these data streams and do some initial preparation, maybe even some
22:33 initial joins with additional context on the incoming raw data, and then that
22:38 stream can be repurposed in a number of different applications: you
22:43 start off with a stream for a particular use case, then find the data stream has
22:48 other uses
22:49 maybe by joining that stream with other streams you get even more valuable
22:52 information
22:54 this application is doing three different things with the same data stream. In this
22:57 branch we are doing some aggregation on it, so it's taking the raw stream
23:04 and chunking it into fifteen-minute windows, and then we're calculating some
23:10 aggregates on that, such as sum and average, and grouping by a number of different
23:14 dimensions, and those dimensions will allow us to slice and dice the data
23:20 by those dimensions later on
23:26 but that raw data may not be enough, so as these aggregates are coming through
23:30 we are utilizing an in-memory grid that we've loaded product information
23:36 into, and we enable you to enrich the data as it's flowing through with this
23:45 additional data. And what does that look like in SQL? It's a join, so you just do a simple
23:51 join: everything from the stream, everything from the lookup, where the
23:54 product keys match
23:57 and so now the data you have going through is the aggregates we
24:02 produced before, and it has this additional information from the lookup
24:08 also added to the stream
24:11 and all of that is just being stored in the internal results store
24:16 the other processing that we are doing in here, for example, is a check on
24:23 users, so we're looking at users on a moving basis within any five-minute period for
24:30 any users that have a response time for the website, how long the page takes to come
24:35 back, of more than two seconds, more than five times
24:41 and if we find those people then we will not just take the fact that we've seen
24:48 that user, we will take everything they were doing in the window at that time and
24:52 join it with information that we've loaded, context information for the
24:56 user and information about the product, and all of that is being written to a
25:00 file
25:02 the final thing that we are doing here is looking for people that may
25:06 potentially abandon the shopping cart, and this is where we're utilizing some
25:11 complex event processing type of functionality, so we're looking for a
25:14 sequence of events over time where the user is first browsing or searching for
25:21 things on the web site,
25:22 which results in these log entries, then they add things to a shopping cart, they do all of
25:28 that three times, and then they go back to browsing and looking at things. If
25:34 they do that then we would flag them as someone that might potentially
25:36 abandon the shopping cart, and again in this case we are enriching them with user
25:41 information and writing into a store
25:45 so that's a basic application built using our product. If I want to run this,
25:51 the first thing you need to do is deploy it, and when you deploy it, it is very flexible: you can
25:55 deploy on one node, you can deploy everywhere in a cluster, you can deploy bits of the
26:01 application on different parts of the cluster, and that's very useful if you
26:06 have certain
26:07 pieces of the cluster used for sourcing data, others where you want to do the processing,
26:12 others where you store the data, etcetera
26:15 when I deploy the application, all of the definition that you have seen here
26:19 becomes runtime objects; if I now start this up it's going to start processing the data
26:27 so as the data is flowing through you can take a look at it: this is the data
26:31 flowing through, the raw data coming into this initial stream, and we can look at the
26:36 data along the way as well, so I can say I want to see what's going on with the users
26:41 down here, so I can preview this. This doesn't happen that often, but when
26:45 it does
26:46 you can see I have all of the information, so this is a really good way of debugging your application
26:51 as you are going along and analyzing and looking at what's actually happening
26:57 so when you have this running application you can think about building dashboards,
27:01 and that's what we built for this application, so we can take a look at
27:05 this
27:09 This dashboard is built using the dashboard builder, so
27:15 you can drag and drop any of these visualizations into the dashboard.
27:19 For example, I want a new pie chart. All of the visualizations are powered by a query
27:25 against the back end, and so I'm going to take an existing query,
27:31 and the unique thing about this query here is this syntax, which basically means:
27:35 when you execute the query, go back fifteen minutes and then
27:41 get all the data from then till now, and as you get new live data, continue to push
27:46 it to the front end in real time. That's defined the query for the visualization;
27:52 now you set up how the visualization uses the values from the query. We're going to
27:57 use
28:00 a pie chart of hits, that is, the number of times the page was hit, by page.
28:08 Save, and we now have a new visualization on our dashboard.
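The query semantics just described, go back fifteen minutes when the query starts, return everything from then until now, and then keep pushing new events to the visualization as they arrive, amount to a backfill-then-tail pattern. The sketch below is a conceptual Python approximation; the result_store, live_queue, and push_to_frontend interfaces are assumptions, not the dashboard builder's actual query syntax.

```python
import time

LOOKBACK_SECONDS = 15 * 60   # "go back fifteen minutes" when the query starts

def push_query(result_store, live_queue, push_to_frontend):
    """Backfill the last fifteen minutes, then stream new events as they arrive."""
    start = time.time() - LOOKBACK_SECONDS

    # 1. Historical part: everything from fifteen minutes ago until now.
    for event in result_store.range_query(since=start):
        push_to_frontend(event)

    # 2. Live part: keep pushing new events to the visualization in real time.
    while True:
        event = live_queue.get()   # blocks until a new event arrives
        push_to_frontend(event)
```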
28:13 It's a very simple dashboard that we've built for demonstration purposes;
28:19 there are more complex dashboards that we have built for customers.
28:26 So I'll go back to my applications page,
28:30 stop this application from running,
28:36 and undeploy it. We have another application over here that we built; this more
28:42 full-featured application is doing a lot more types of calculations, and it is
28:48 actually monitoring financial transactions that may be occurring through a variety
28:53 of different channels: points of sale, ATMs, checks, etcetera.
29:01 If we take a look at the
29:02 dashboard for that one,
29:05 you see it's much richer, much more fully featured, and this is not only monitoring
29:11 the types of transactions happening; it's actually looking in real time at the
29:16 number of declined transactions, and any change in the decline rate that is too
29:23 dramatic will be flagged as an alert.
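The alerting rule mentioned here, flagging any change in the decline rate that is too dramatic, boils down to comparing the decline rate of the current interval with that of the previous one. Below is a minimal sketch of that check; the 20-point threshold and field names are assumptions chosen for illustration.

```python
ALERT_THRESHOLD = 0.20   # assumed: alert if the decline rate moves by more than 20 points

def decline_rate(transactions):
    """Fraction of transactions in the interval that were declined."""
    if not transactions:
        return 0.0
    declined = sum(1 for t in transactions if t["status"] == "DECLINED")
    return declined / len(transactions)

def check_decline_alert(previous_interval, current_interval):
    """Return an alert message if the decline rate changed too dramatically."""
    change = abs(decline_rate(current_interval) - decline_rate(previous_interval))
    if change > ALERT_THRESHOLD:
        return f"ALERT: decline rate changed by {change:.0%} between intervals"
    return None
```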
29:28 And this application does allow you to drill down, so I can drill down into a
29:34 particular location, I can drill down into what's happening with the
29:40 transactions overall, which channel they came through, whether they are debit, ATM, or credit transactions,
29:47 I can drill down by an individual state,
29:56 and a variety of other ways of slicing and dicing the data.
30:04 So thank you for attending
30:06 today. I hope you now understand more about our products, and
30:15 we're now open for any questions you may have.

CIO.com Webinar: 8 Priorities for Modernizing Your Data Integration and Analytics Strategy


There has been great change and innovation in all things data. Big data. Streaming data integration. Fast analytics. Cloud processing and storage. IoT. (Just to name a few.) As CIO or head of data architecture or BI/analytics, it’s up to you to determine which data technologies will propel your business forward.

The challenge has become less about determining what these technologies can do, and more about understanding how, and at what cost, these technologies can help you and your team deliver real business value.

What is needed is a framework of priorities to help organizations make sense of these technologies as they seek to modernize their data integration and analytics strategies.

CIO.com and Striim invite you to join Philip Russom, Director of TDWI Research, and Steve Wilkes, CTO and co-founder of Striim, for a half-hour, deep-dive webinar:

 

8 Priorities for Modernizing Your Data Integration and Analytics Strategy

Wednesday, April 13, 2016

10am PDT / 1pm EDT / 5pm UTC

 

Learn the areas of data integration and analytics that can deliver the greatest benefits to the business. For each priority, real-world customer use cases will be shared to explain how these objectives can be quickly and easily achieved.  

Register Now.

 

 

Is ETL Now a 4-Letter Word? Preparing for Streaming Analytics

The time-tested process of Extract-Transform-Load (ETL) is fast losing its ability to cope with the volume, velocity and variety of Big Data coming down the pike. Forward-thinking companies are prepping the battlefield by designing on-ramps to the future of streaming analytics. Please join The Bloor Group and the Striim team for the live webinar, Is ETL Now a 4-Letter Word? Preparing for Streaming Analytics.

The Bloor Group CEO Eric Kavanagh hosts this episode of The Briefing Room, as analyst Mark Madsen explains how a new era of data solutions is rising to the challenge of streaming data.

Tuesday, October 20th
1pm PDT / 4pm EDT / 8pm GMT
Register: WebEx Link

Steve Wilkes, Founder and CTO of the Striim platform, will share how enterprises are turning to streaming data integration, in-memory transformations and continuous processing to achieve the goals of ETL in milliseconds – at a fraction of the cost and complexity of legacy systems. Several case studies will be shared.

How to Exploit Perishable Insights the Instant Your Data Is Generated

Please join us to learn how collecting data the instant it’s born, and pre-processing it in memory (that is, pre-Kafka, pre-Hadoop, pre-disk), can help you tap into your gold mine of perishable insights.

Mike Gualtieri, Forrester Research Principal Analyst, is one of the world’s most renowned and respected analysts in the area of Streaming Analytics. He developed the de facto definition of Streaming Analytics over a year ago, and, with Forrester Research behind him, truly has his finger on the pulse of this key enabling market as it evolves.

We are honored to present with Mike as our guest speaker in our upcoming live webinar:

Data is Born Fast!
How to Exploit Perishable Insights the Instant Your Data Is Generated

Wednesday, Oct 7
11am PDT / 2pm EDT / 6pm GMT

As data volumes and variety increase, many enterprises are struggling to get the timely insights they need to take care of their customers and grow their business. Why?

  • All their data is getting dumped into a data lake without any filtering or organization, making it too costly to get the insights back out.
  • Understanding and collecting streaming data from many different sources – including databases, IoT devices, and the numerous different logs scattered around the enterprise – is difficult.
  • There aren’t enough hard-core developers available to wire together the myriad of commercial and/or open source technologies necessary to make sense of all of their data before it lands on disk. (And it still might not work at scale.)

Join Mike Gualtieri and Steve Wilkes, Founder/CTO of the Striim platform, to look at a new approach to Big Data Streaming Analytics.

https://youtu.be/U1KhWZhFhME

 

 

The Bloor Group Webinar: Time Difference: How Tomorrow’s Companies Will Outpace Today’s

Join The Bloor Group and WebAction in the Briefing Room for Time Difference: How Tomorrow’s Companies Will Outpace Today’s. In our increasingly interconnected world, the windows of opportunity for meaningful action are shrinking. Where hours once sufficed, minutes are now the norm. For some transactions, seconds make all the difference, even sub-seconds. Meeting these demands requires a new approach to information architecture, one that embraces the many innovations that are fundamentally changing the data-driven economy. WebAction delivers high-velocity Big Data analytics so you can quickly build tailored enterprise-scale applications.

Join us for the Briefing Room on February 10th from 1PM – 2PM (Pacific).

Key topics

  • Learn how a confluence of advances is changing the nature of data management
  • See how the WebAction real-time data platform leverages Big Data in concert with all manner of operational enterprise systems

Who should attend

  • CIOs and CTOs
  • Application architects
  • Data scientists and analysts


Inside Analysis: How the Data Explosion Changes the Way We Do Business

The Bloor Group CEO Eric Kavanagh and WebAction EVP Sami Akbay discuss big data trends that are drastically affecting business today in the latest webcast from The Bloor Group, Inside Analysis. They consider how the onslaught of new kinds of data (machine-generated, social media, and transactional, to name a few) has overwhelmed existing infrastructures, and the options businesses have to adapt and keep their customers happy. Learn how the WebAction Real-time App Platform processes all types of data in innovative ways in real time.

Listen to this episode of Inside Analysis, How the Data Explosion Changes the Way We Do Business

About The Bloor Group

The Bloor Group provides detailed analysis of today’s enterprise software industry. Co-founded by Dr. Robin Bloor and Eric Kavanagh, the company leverages Web-based multimedia to deliver vendor-neutral education that is designed to reveal the essential characteristics of information technologies. The Bloor Group galvanizes the industry’s independent analysts to provide valuable insights via innovative Webcasts, articles, research programs and white papers.

 

Register Now for the TDWI Webinar “Real-Time Data, BI and Analytics”

Sign up for the TDWI webinar on October 14, 2014 at 9:00 AM PT, sharing research findings from TDWI’s new Best Practices report, Real-Time Data, BI, and Analytics: Accelerating Business to Leverage Customer Relations, Competitiveness, and Insights.

About the TDWI Webinar

Learn more about trends and drivers for real-time data, BI, and analytics. “Over three quarters of organizations surveyed consider real-time operations as an opportunity,” according to the TDWI report. The analysts will cover the real-world use cases fueling the competitive edge of organizations using real-time data, and which real-time options are poised for the greatest adoption in coming years.

Register for the TDWI Webinar

Learn More about the WebAction Real-time App Platform

 

 

Making Decisions While They Matter: The Importance of Real-time Big Data (GigaOM Webinar)

Learn about real-time data processing architectures. Join WebAction on Thursday, May 29th at 10am PDT for a GigaOM Research webinar: Making Decisions While They Matter: The Importance of Real-time Big Data (register here). A roundtable panel of experts includes WebAction co-founder Alok Pareek and GigaOM Research analysts David Loshin and William McKnight. The panel will be moderated by Andrew Brust.

Here is the webinar description:

Data-driven decision-making is a popular and worthwhile goal, but few infrastructures are truly up to the task of supporting real-time data. The business needs driving analytics are evolving quickly, and even recent investments in complex systems can still fall short of the moving target. The scarcity of highly skilled data scientists can create logjams, batch tools cannot provide the real-time data business experts demand, new data sources are frequently siloed away from other enterprise applications, and the growth in volume of potential data inputs is outpacing the drop in storage prices that has supported it.

Building for the real-time future means embracing a flexible and robust architecture that can handle new data types and consumption models, with the scale they will require. This Webinar will examine what it takes to build and manage forward-looking real-time enterprise analytics, from ingestion through correlation, the processing itself, visualization, and distribution, plus all the governance that surrounds it.

Key topics of discussion

    • How are scale, complexity, and a changing user base affecting the requirements for modern, scalable real-time analytics?
    • What are the strengths and limitations of real-time and batch analytics?
    • What architectural concepts are critical as the enterprise embraces real-time analytics?
    • How should application developers approach real-time analytics?

Register now for Thursday, May 29th at 10am PDT.

http://research.gigaom.com/webinar/making-decisions-while-they-matter-the-importance-of-real-time-big-data/