Friday 24 February 2017

IoT and Data: Part 2

DAQ
The second part is here; you didn't have to wait too long, did you? This article addresses two more problems we think IoT and DAQ will face in the future.

Problem 2: The average customer uses three to five file formats per project. 

With so many custom solutions on the market, your current application likely involves a variety of vendors to complete your task. Sometimes these vendors require you to use closed software that exports in a custom format. A common pain point is that collecting data from these many formats often requires different tools to read and analyze it. NI addresses this challenge with DataPlugins, which map a file format onto the universal TDM data model. You can then use a single tool, such as LabVIEW or DIAdem, to create your analysis routines.
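To make the idea concrete, here is a minimal sketch (in Python, purely illustrative) of what a DataPlugin conceptually does: read a vendor-specific file and map it onto a common "file properties / group / channels" structure. The file layout, function name, and dictionary structure below are all invented; a real DataPlugin is written against NI's DataPlugin API and the actual vendor format.

```python
# Illustrative sketch only: maps a hypothetical vendor export onto a generic
# TDM-like hierarchy (file properties -> group -> channels). The file layout
# here is invented; a real DataPlugin is written against NI's DataPlugin API.
import csv

def load_vendor_file(path):
    """Parse a made-up export: '# key: value' header lines, then a CSV table."""
    properties, channels = {}, {}
    with open(path, newline="") as f:
        data_lines = []
        for line in f:
            if line.startswith("#"):                    # header metadata line
                key, _, value = line[1:].partition(":")
                properties[key.strip()] = value.strip()
            else:
                data_lines.append(line)                 # rows of the data table
    for row in csv.DictReader(data_lines):
        for name, value in row.items():
            channels.setdefault(name, []).append(float(value))
    # A single, common structure that any downstream analysis tool can consume
    return {"properties": properties,
            "groups": [{"name": "Measured Data", "channels": channels}]}
```

Once every format lands in the same structure, one analysis routine can serve them all, which is exactly the point of mapping everything to TDM.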

Problem 3: It takes too long to find the data you need to analyze. 

The Aberdeen Master Data Management research study surveyed 122 organizations and asked how long it takes them to find the data they need to analyze. The answer: five hours per week! That is just searching for the data, not analyzing it. From an engineering point of view, this is not that shocking to me. How many of us have faced what I call the "blank VI syndrome" for data? How do you even begin analyzing your data?
DataFinder indexes any metadata included in the file, file name, or folder hierarchy of any file format. Again, this relies on a well-documented file, but by now I'm sure you have decided to use TDM for your next application.
Once the metadata has been indexed, you can perform queries, either text-based (as you would in your favorite search engine) or conditional (as in a database), to find data in seconds. With this advanced querying, you can return results at the channel level to track trends in individual channels from multiple files over time.
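DataFinder is an off-the-shelf NI tool, but the underlying pattern, index the metadata once and then query it, is easy to illustrate. The toy sketch below is not DataFinder's API; it simply indexes a few hypothetical channel properties into SQLite and runs a conditional, channel-level query.

```python
# Toy stand-in for a metadata index: not DataFinder, just the same idea in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE channels (
                    file_name TEXT, channel TEXT, unit TEXT, max_value REAL)""")

# Hypothetical metadata harvested from a few test files
conn.executemany(
    "INSERT INTO channels VALUES (?, ?, ?, ?)",
    [("run_001.tdms", "Temperature", "degC", 81.2),
     ("run_002.tdms", "Temperature", "degC", 96.7),
     ("run_002.tdms", "Vibration",   "g",    1.4)])

# Conditional query at the channel level: which files contain a hot temperature channel?
for row in conn.execute(
        "SELECT file_name, max_value FROM channels "
        "WHERE channel = 'Temperature' AND max_value > 90"):
    print(row)
```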

Wednesday 22 February 2017

IoT and Data: Part 1

Data acquisition system
With the rise of the Industrial Internet of Things, one thing is clear: engineers need to extract meaningful information from the enormous amounts of machine data being collected.
Data from machines, the fastest-growing type of data, is expected to exceed 4.4 zettabytes (that is 21 zeros) by 2020. This type of data is growing faster than social media data and other traditional sources. That may sound surprising, but consider that those other data sources are what I call "human limited": there are only so many tweets or pictures a person can upload throughout the day, and only so many movies or TV shows a person can binge-watch on Netflix to get the next set of recommendations. Machines, however, can collect hundreds or even thousands of signals around the clock in an automated fashion. In the very near future, the data generated by our more than 50 billion connected devices will easily surpass the amount of data humans create.
The data that machines create is unique, and the big data analysis tools that work for social media or traditional big data sources just won't cut it for engineering data. That is why NI is investing in tools to help you overcome common challenges and confidently make data-driven decisions based on your engineering data, regardless of its size.

Problem 1: 78% of data is undocumented. 

According to research firm International Data Corporation (IDC), “The Internet of Things will also influence the massive amounts of ’useful data’—data that could be analyzed—in the digital universe. In 2013, only 22 percent of the information in the digital universe was considered useful data, but less than 5 percent of the useful data was actually analyzed.”
Data that is considered useful includes metadata, or data that is tagged with additional information. No one wants to open a data source and wonder what the test was, what the channels are called, what units the data was collected in, and so on. NI is solving this problem with our Technical Data Management (TDM) data model. With it, you can add an unlimited number of attributes to a channel, a group of channels, or the entire file.
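As a rough, assumption-heavy illustration of that hierarchy (plain Python, not the actual TDM/TDMS file format or API), here is what attributes attached at the file, group, and channel levels look like conceptually; in practice you would write them with LabVIEW's TDMS VIs or with DIAdem. All names and values below are invented.

```python
# Plain-Python sketch of the TDM idea: descriptive properties can live at the
# file, channel-group, and channel level. Names and values here are invented.
tdm_file = {
    "properties": {"operator": "J. Smith", "test_rig": "Rig 4", "date": "2017-02-22"},
    "groups": [
        {
            "name": "Thermal Cycle",
            "properties": {"profile": "ramp-soak", "cycles": 12},
            "channels": [
                {"name": "Inlet Temperature",
                 "properties": {"unit": "degC", "sensor": "TC-K", "sample_rate_hz": 10},
                 "data": [21.4, 21.9, 22.6]},
            ],
        },
    ],
}

# Any tool that understands the structure can answer "what units is this in?"
chan = tdm_file["groups"][0]["channels"][0]
print(chan["name"], "is logged in", chan["properties"]["unit"])
```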
The second part will discuss problems (or challenges) 2 and 3. Don't miss it!

Monday 20 February 2017

Common Problems with the LabVIEW Real-Time Module: Part 2

labview expert
The second part of our series addresses the difficulty of setting up a connection between a Compact FieldPoint (cFP) controller and SQL Server 2000.
Let’s set up a possible scenario:
You have a SQL server you would like to communicate with directly (preferably with no software in between).
There is more than one way to solve this problem, but we've narrowed it down to the two that are most likely to be a good fit:

1. FTP files from the cFP onto an IIS FTP server (push the data, then import it with DTS).

This should be fairly easy to accomplish. As an alternative, you can write a LabVIEW app for your host computer (SQL Server Computer) that uses the Internet Toolkit to FTP the files off the cFP module, and writes the data from the file into the SQL Server using the SQL Toolkit. As another alternative, you can use DataSockets to read the file via FTP, parse it, and write the data to the SQL Server using the SQL Toolkit.
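If you want to prototype that host-side piece outside LabVIEW first, the same pattern (pull the file off the cFP over FTP, parse it, insert it into SQL Server) looks roughly like the Python sketch below. The controller address, credentials, file layout, connection string, and table schema are all placeholder assumptions; the Internet Toolkit plus SQL Toolkit approach described above is the equivalent in G.

```python
# Sketch only: pull a log file from the cFP controller over FTP, parse it,
# and insert the rows into SQL Server. All names and credentials are placeholders.
import csv
import io
from ftplib import FTP

import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver are installed

CFP_HOST = "192.168.1.50"    # hypothetical cFP controller address
LOG_FILE = "log.csv"         # hypothetical file the cFP application writes

# 1. FTP the log file off the controller
buf = io.BytesIO()
with FTP(CFP_HOST, user="anonymous", passwd="") as ftp:
    ftp.retrbinary(f"RETR {LOG_FILE}", buf.write)

# 2. Parse it (assumed layout: timestamp,channel,value per line)
reader = csv.reader(io.StringIO(buf.getvalue().decode("ascii")))
rows = [(ts, channel, float(value)) for ts, channel, value in reader]

# 3. Insert the readings into SQL Server (connection string and table are placeholders)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
    "DATABASE=daq;UID=daq_user;PWD=secret")
with conn:
    conn.cursor().executemany(
        "INSERT INTO readings (ts, channel, value) VALUES (?, ?, ?)", rows)
```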

2. Write a custom driver/protocol (which will run on the cFP)

You can accomplish this; however, it is subject to some limitations and difficulties. One approach is a modification of the first solution: create a host-side LabVIEW program that communicates with the cFP controller via a custom TCP protocol you implement, retrieves data at specified intervals, and logs it directly to the database.
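Here is a bare-bones sketch of such a host-side poller, assuming a made-up, line-based protocol in which the controller answers a "READ" request with one "timestamp,channel,value" line per buffered sample and a blank line to end the batch. The real protocol, port, and data layout are whatever you choose to implement on the cFP side.

```python
# Host-side poller for a hypothetical custom TCP protocol served by the cFP.
# Protocol assumed here: send b"READ\n", receive CSV lines, blank line ends the batch.
import socket
import time

CFP_ADDR = ("192.168.1.50", 5555)   # placeholder address and port
POLL_PERIOD_S = 10                  # arbitrary polling interval

def fetch_batch():
    with socket.create_connection(CFP_ADDR, timeout=5) as sock:
        sock.sendall(b"READ\n")
        stream = sock.makefile("r", encoding="ascii")
        batch = []
        for line in stream:
            line = line.strip()
            if not line:                      # blank line marks end of batch
                break
            ts, channel, value = line.split(",")
            batch.append((ts, channel, float(value)))
        return batch

while True:
    samples = fetch_batch()
    # ...insert `samples` into the database here, e.g. with the same
    # pyodbc executemany() call shown in the FTP example above...
    print(f"logged {len(samples)} samples")
    time.sleep(POLL_PERIOD_S)
```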
How do you like the solutions our LabVIEW experts are providing? Are you working on a LabVIEW project at the moment? Let us know in the comments.

Sunday 19 February 2017

Common Problems with the LabVIEW Real-Time Module

Labview freelance projects
This is the first part of a series in which we address problems users encounter with the LabVIEW Real-Time Module.
If the instability of Windows appears to be a concern, we recommend a fault tolerant design that could handle the Windows platform going down occasionally.
Here's what we're talking about:
Three machines:
1) DB Server
2) DB Writer
3) RT Data Collection
Notes:
1) A DB server of your choice, preferably SQL-based.
2) Responsible for pulling readings from the RT system and writing them to the DB; the buffer between the two systems. More on this below.
3) RT data collection and short-term storage. More below.
The DB Writer acts as a buffer between the short-term storage on the RT platform and the server. The data collected from the RT system will come in at a steady rate, while the updates going to the DB should be treated as asynchronous.
Break the RT app into two parts: a Time-Critical (TC) loop and an "other" loop. The TC loop reads data and queues it to the other loop. The other loop reads the queued data and writes it to LV2-style globals. These LV2 globals should maintain an array of the data; if the array exceeds some predetermined level, the newest data goes to a buffer file. This journaling to file allows the Windows-based DB Writer to fall behind, reboot, and so on.
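The LV2-style global described above is a functional global: a subVI whose while loop holds the data in an uninitialized shift register. A rough text-language sketch of the same buffering behavior, including the spill-to-file journaling, might look like the following (the threshold, journal format, and function names are arbitrary choices for illustration).

```python
# Text-language sketch of the buffering behaviour of the LV2-style global:
# keep recent samples in memory, journal the overflow to a file so the
# Windows-side DB Writer can fall behind or reboot without losing data.
import json

MAX_IN_MEMORY = 1000                 # arbitrary threshold
JOURNAL_PATH = "backlog.jsonl"       # placeholder journal file

_buffer = []                         # stands in for the uninitialised shift register

def write_sample(sample):
    """Called by the 'other' (non-time-critical) loop for every queued sample."""
    _buffer.append(sample)
    if len(_buffer) > MAX_IN_MEMORY:
        # Spill the newest data to the journal file so memory stays bounded
        with open(JOURNAL_PATH, "a") as f:
            f.write(json.dumps(_buffer.pop()) + "\n")

def read_samples():
    """Called (remotely) by the DB Writer: returns the buffered samples plus a
    flag saying whether journaled samples are still waiting on disk."""
    global _buffer
    out, _buffer = _buffer, []
    more_waiting = _refill_from_journal()
    return out, more_waiting

def _refill_from_journal():
    # Move the oldest journaled samples back into the in-memory buffer (simplified:
    # a real implementation would consume the file incrementally, not rewrite it).
    try:
        with open(JOURNAL_PATH) as f:
            lines = f.readlines()
    except FileNotFoundError:
        return False
    _buffer.extend(json.loads(line) for line in lines[:MAX_IN_MEMORY])
    with open(JOURNAL_PATH, "w") as f:
        f.writelines(lines[MAX_IN_MEMORY:])
    return len(lines) > MAX_IN_MEMORY
```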

Meanwhile, on the Windows platform...

The DB Writer could periodically use a Call By Reference node to execute a read operation on the LV2 global written by the "other" loop on the RT platform. The data read is then written to the DB using the SQL Toolkit or a similar tool.
The data collection rate will determine the amount of local disk storage you need on the RT platform to buffer the backlog while the Windows app is down. The size of the cached array in the LV2 global should be large enough to handle the non-periodic nature of the DB Writer's read operations. When the LV2 global on the RT platform is read by the DB Writer, it should return the contents of the cached buffer. If there are samples waiting in the RT's buffer file, the oldest values should be read from the file and placed in the buffer, waiting for the next read. The LV2 global should also return a Boolean or similar flag that indicates more readings are waiting to be read.
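On the Windows side, the DB Writer's job then boils down to: read from the RT global, write whatever came back to the database, and keep reading while the "more waiting" flag is true. A skeleton of that loop is sketched below; `read_samples` and `insert_rows` are placeholders standing in for the Call By Reference read and the SQL Toolkit write.

```python
# Skeleton of the DB Writer's periodic drain loop. `read_samples()` stands in for
# the call-by-reference read of the RT-side LV2 global, and `insert_rows()` for
# the SQL Toolkit write; both are placeholders here.
import time

POLL_PERIOD_S = 30                   # arbitrary; tune to the acquisition rate

def drain_once(read_samples, insert_rows):
    """Read from the RT buffer until its 'more waiting' flag clears."""
    while True:
        samples, more_waiting = read_samples()
        if samples:
            insert_rows(samples)     # write this batch to the database
        if not more_waiting:
            break

def run(read_samples, insert_rows):
    while True:
        drain_once(read_samples, insert_rows)
        time.sleep(POLL_PERIOD_S)
```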
We realize that this article does not tell you how to write to a DB from the RT platform, but it does present an architecture that lets you harness the stability of an RT-based app while taking full advantage of the functionality that is already in place. Our LabVIEW experts will try to provide answers to your questions. Do you have any?

Wednesday 15 February 2017

The LabVIEW Real-Time Module

professional labview expert
As you already know, ReadyDAQ is developing a program for real-time systems. ReadyDAQ for real-time will be based on the LabVIEW Real-Time Module, which is a solution for creating reliable, stand-alone embedded systems with a graphical programming approach. In other words, it is an add-on to the existing LabVIEW development environment. This module helps you develop and debug graphical applications that you can download to and execute on embedded hardware devices such as CompactRIO, CompactDAQ, PXI, vision systems, or third-party PCs.
Why should you consider the Real-Time Module? Well, here are three advantages that will change your mind:

1. Extend LabVIEW Graphical Programming to Stand-Alone Embedded Systems 

LabVIEW Real-Time includes built-in constructs for multithreading and real-time thread scheduling to help you efficiently write robust, deterministic code. You can graphically program stand-alone systems to run reliably for extended periods. ReadyDAQ for real-time takes full advantage of this capability, and it is implemented in the solution we offer.

2. Exploit a Real-Time OS for Precise Timing and High Reliability 

General-purpose OSs are not optimized to run critical applications with strict timing requirements. LabVIEW Real-Time supports NI embedded hardware that runs either the NI Linux Real-Time, VxWorks, or Phar Lap ETS real-time OS (RTOS).

3. Utilize a Wide Variety of IP and Real-Time Hardware Drivers 

Use the many prewritten LabVIEW libraries, such as PID control and FFT, in your stand-alone systems. Real-time hardware drivers and LabVIEW APIs are also provided for most NI I/O modules, enabling deterministic data acquisition.
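For a flavor of what one of those prewritten building blocks does, here is a textbook discrete PID step written out in Python; this is the generic algorithm, not NI's PID VI, and the gains and loop timing are made-up values.

```python
# Generic discrete PID step, shown only to illustrate the kind of building block
# the LabVIEW PID library provides. Gains and dt are placeholder values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)   # 100 Hz loop, made-up gains
output = controller.step(setpoint=50.0, measurement=47.3)
```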
Given the points made above, you can see that the Real-Time Module can only benefit you and your company. In the upcoming weeks, you can read about common problems users experience with the LabVIEW Real-Time Module, as well as solutions to those problems from our professional LabVIEW experts.

Tuesday 14 February 2017

Big Data About Real Time - Part 1

temperature data logger
The data warehouse, as valuable as it has been, is history. The most valuable data will be what is gathered and analyzed during the customer interaction, not in a review afterward.
It's clear there's a change in enterprise data handling underway. This was evident among the big data enthusiasts attending the Hadoop Summit in San Jose, Calif., and the Spark Summit in San Francisco earlier this month.
One phase of this change is in the scale of the data being collected, as valuable "machine data" piles up faster than sawdust in a sawmill. Another phase, one that is less frequently discussed, is the movement of data toward near-real-time use.
The analysis that counts is no longer of the results of the last three months or even the last three days, but of the last 30 seconds - probably less.
In the digital economy, interactions will happen in near real-time, and data analysis needs to be able to keep up. Hadoop and its early implementers, such as Cloudera and Hortonworks, have risen to prominence on the strength of their mastery of scale. They ingest data at an enormous rate, one that was unimaginable just a few years ago.
"We see 50 billion machines connected to the Internet in five to ten years," said Vince Campisi, CIO of GE Software, at the Hadoop Summit. "We see a significant convergence of the physical and digital world."
The merging of the physical operation of wind turbines and jet engines with machine data means the physical object gets a virtual counterpart. Its state is captured as sensor data and stored in the database. When analysis is applied, that virtual counterpart can take on a life of its own, and the system can predict when parts will fail and bring real-life operations to a standstill.
However, Davenport's framing of the change was incomplete. It did not include the element of immediacy, of the near-real-time results required as data is analyzed. It's that immediacy that IBM was acting on when it issued its ringing endorsement of Apache Spark.
Spark is the new kid on the block, an in-memory framework that is not exactly obscure but is still an outsider in data warehouse circles. IBM said it would pour resources into Spark, an Apache Foundation open source project.
"IBM will offer Apache Spark as a service on Bluemix, commit 3,500 researchers to work on Spark-related projects, donate IBM SystemML to the Spark ecosystem, and offer courses to train 1 million data scientists and engineers to use Spark," wrote InformationWeek's William Terdoslavich after IBM's announcement.
Stay tuned for part two to find out more about big data vendors and their plans for real-time data.

Real-time Data Acquisition

data acquisition system

What is RTD?

Real-time data (RTD) is information that is delivered immediately after it has been gathered; there is no delay in the timeliness of the data. Real-time data is often used for navigation or tracking.
Some uses of the term "real-time data" confuse it with the term dynamic data; in fact, whether data is real-time is independent of whether it is dynamic or static. Real-time analytics is the use of, or the capacity to use, data and related resources as soon as the data enters the system. The adjective real-time refers to a level of computer responsiveness that a user senses as immediate or nearly immediate. Real-time analytics is also known as dynamic analysis, real-time analysis, real-time data integration, and real-time intelligence.

Technologies that support real-time analytics include:

  • Processing in memory (PIM) --  a chip architecture in which the processor is integrated into a memory chip to reduce latency. 
  • In-database analytics -- a technology that allows data processing to be conducted within the database by building analytic logic into the database itself. 
  • Data warehouse appliances -- combination hardware and software products designed specifically for analytical processing. An appliance allows the purchaser to deploy a high-performance data warehouse right out of the box. 
  • In-memory analytics -- an approach to querying data when it resides in random access memory (RAM), as opposed to querying data that is stored on physical disks.
  • Massively parallel programming (MPP) -- the coordinated processing of a program by multiple processors that work on different parts of the program, with each processor using its own operating system and memory.
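The in-memory entry above is the heart of most real-time analytics: keep the recent data in RAM and update the answer as each sample arrives, so a query never has to touch disk. Here is a toy Python illustration of that idea (the window size and sensor readings are invented).

```python
# Toy in-memory analytics: a rolling window kept in RAM and updated per sample,
# so "what is the average of the last minute?" is answered without touching disk.
from collections import deque

WINDOW = 60                          # keep the last 60 samples (e.g. 1 Hz for a minute)
window = deque(maxlen=WINDOW)

def ingest(sample):
    window.append(sample)

def current_average():
    return sum(window) / len(window) if window else None

for reading in [20.1, 20.4, 20.3, 21.0]:   # stand-in for a live sensor stream
    ingest(reading)
    print("running average:", current_average())
```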

Why are we talking about this?

ReadyDAQ is preparing a product for real-time data acquisition. In the following weeks, we will describe in detail the pros and cons of RTD, how it works, and when you should use it. Stay tuned for the articles to come; you'll love them!