Monday 18 December 2017

Engineers Turn to Automated Test Equipment to Save Time

http://www.readydaq.com/content/blog/engineers-turn-automated-test-equipment-save-time
With engineers rushing tests in order to hit tight product deadlines, the market for test equipment that automatically detects faults in semiconductors and other components is growing.
Setting aside time for testing has been a struggle for electrical engineers, and the shrinking size - and increasing complexity - of semiconductor circuits is not making life any easier. Nearly 15% of wireless engineers outsource final testing, and more than 45% outsource manufacturing – the stage at which most semiconductor testing takes place.
Almost 65% of the survey respondents said that testing is still a challenge in terms of time consumption. New chips designed for tiny connected sensors and autonomous cars also require rigorous testing to ensure reliability.
Tight deadlines for delivering new products are forcing engineers toward automated test equipment, also known as ATE, which quickly identifies defects in semiconductors, especially those used in smartphones, communication devices, and consumer electronics.
The global automated test equipment market is estimated to reach $4.36 billion in 2018, up from $3.54 billion in 2011, according to Transparency Market Research, a technology research firm.
Automated test equipment is used extensively in semiconductor manufacturing, where integrated circuits on a silicon chip must be tested before they are prepared for packaging. It cuts down the time it takes to test more complex chips, which incorporate higher speeds, performance, and pin counts. Automatic testing also helps to locate flaws in system-on-chips, or SoCs, which often contain analog, mixed-signal, and wireless parts on the same silicon chip.


Saturday 16 December 2017

Semiconductor Testing


http://www.readydaq.com/content/blog/semiconductor-testing

Automated test equipment (ATE) is computer-controlled test and measurement equipment that allows for testing with minimal human interaction. The tested devices are referred to as a device under test (DUT). The advantages of this kind of testing include reducing testing time, repeatability, and cost efficiency in high volume. The chief disadvantages are the upfront costs of programming and setup.
Automated test equipment can test printed circuit boards, interconnections, and wiring. ATE is commonly used in wireless communication and radar. Simple ATEs include volt-ohm meters that measure resistance and voltages in PCs; complex ATE systems have several mechanisms that automatically run high-level electronic diagnostics.
ATE is used to quickly confirm whether a DUT works and to find defects. When the first out-of-tolerance value is detected, the testing stops and the device fails.
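As a rough illustration, this stop-on-first-failure behavior amounts to a limit-test loop. Below is a minimal sketch in Python; the test names, limits, and the measure() stub are hypothetical placeholders, not any particular tester's API.

```python
# Minimal sketch of ATE-style limit testing: run the measurements in
# sequence and stop at the first out-of-tolerance value.

TEST_PLAN = [
    # (test name, lower limit, upper limit) -- illustrative values
    ("supply_current_mA", 10.0, 25.0),
    ("output_voltage_V", 3.2, 3.4),
    ("rise_time_ns", 0.0, 12.0),
]

def measure(test_name):
    """Stand-in for a real instrument reading."""
    readings = {"supply_current_mA": 18.2,
                "output_voltage_V": 3.31,
                "rise_time_ns": 14.5}
    return readings[test_name]

def test_dut():
    for name, lo, hi in TEST_PLAN:
        value = measure(name)
        if not (lo <= value <= hi):
            print(f"FAIL: {name} = {value} (limits {lo}..{hi})")
            return False          # first failure ends the run
    print("PASS: all values within limits")
    return True

test_dut()
```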

Semiconductor Testing

For ATEs that test semiconductors, the architecture consists of a master controller (typically an industrial PC) that synchronizes one or more source and capture instruments. The DUT is physically connected to the ATE by a machine called a handler, or prober, and through a customized Interface Test Adapter (ITA), or mass interconnect, that adapts the ATE's resources to the DUT.
When testing packaged parts, a handler is used to place the device on a customized interface board, while silicon wafers are tested directly with high-precision probes.

Test Types

Logic Testing

Logic test systems are designed to test microprocessors, gate arrays, ASICs and other logic devices.
Linear or mixed-signal equipment tests components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), comparators, track-and-hold amplifiers, and video products. These components incorporate features such as audio interfaces, signal processing functions, and high-speed transceivers.
Passive component ATEs test passive components including capacitors, resistors, inductors, etc. Typically, testing is done by the application of a test current.
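To make the test-current approach concrete, here is a minimal Python sketch: drive a known current through the part, measure the voltage drop, and compare the computed resistance against the marked value and tolerance. The current, limits, and the measured voltage are made-up stand-ins for real instrument values.

```python
# Illustrative passive-component check: apply a known test current,
# measure the voltage drop, and derive the resistance via Ohm's law.

TEST_CURRENT_A = 0.001      # 1 mA applied test current
NOMINAL_OHMS = 4700.0       # marked value of the resistor under test
TOLERANCE = 0.05            # 5 % part

measured_voltage_V = 4.58   # stand-in for a real voltmeter reading

measured_ohms = measured_voltage_V / TEST_CURRENT_A   # R = V / I
error = abs(measured_ohms - NOMINAL_OHMS) / NOMINAL_OHMS

verdict = "PASS" if error <= TOLERANCE else "FAIL"
print(f"{verdict}: {measured_ohms:.0f} ohms ({error:.1%} from nominal)")
```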
Discrete ATEs test active components including transistors, diodes, MOSFETs, regulators, TRIACS, Zeners, SCRs, and JFETs.

Printed Circuit Board Testing

Printed circuit board testers include manufacturing defect analyzers, in-circuit testers, and functional analyzers.
Manufacturing defect analyzers (MDAs) detect manufacturing defects, such as shorts and missing components, but can't test digital ICs because they test with the DUT powered down (cold). As a result, they assume the ICs are functional. MDAs are much less expensive than other test options and are also referred to as analog circuit testers.
In-circuit analyzers test components that are part of a board assembly. The components under test are "in a circuit." The DUT is powered up (hot). In-circuit testers are very powerful but are limited due to the high density of tracks and components in most current designs. The pins for contact must be placed very accurately in order to make good contact. They are also referred to as digital circuit testers or ICT.
A functional test simulates an operating environment and tests a board against its functional specification. Functional automatic test equipment (FATE) has fallen out of favor because the equipment cannot keep up with the increasing speed of boards, which creates a lag between the board under test and the manufacturing process. There are several types of functional test equipment, and they may also be referred to as emulators.

Interconnection and Verification Testing

Test types for interconnection and verification include cable and harness testers and bare-board testers.
Cable and harness testers are used to detect opens (missing connections), shorts (unwanted connections), and miswires (wrong pins) on cable harnesses, distribution panels, wiring looms, flexible circuits, and membrane switch panels with commonly used connector configurations. Other tests performed by automated test equipment include resistance and hipot tests.
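The three fault classes can be pictured as a comparison between the expected wiring map and the continuity actually measured. The following Python sketch uses hypothetical pin names and wiring maps purely for illustration.

```python
# Sketch of harness fault classification: compare the expected
# pin-to-pin wiring against the continuity actually measured.

expected = {"A1": "B1", "A2": "B2", "A3": "B3"}   # design wiring list
measured = {"A1": "B1", "A2": None, "A3": "B2"}   # continuity found

for src, want in expected.items():
    got = measured.get(src)
    if got == want:
        print(f"{src}->{want}: OK")
    elif got is None:
        print(f"{src}->{want}: OPEN (missing connection)")
    else:
        print(f"{src}->{want}: MISWIRE (lands on {got})")

# A short would show up as one source pin reading continuity to two
# or more destination pins at once.
```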
Bare-board automated test equipment is used to verify the completeness of a PCB's circuits before assembly and wave soldering.

Wednesday 6 December 2017

Exploiting LabVIEW Libraries


Have you ever viewed a LabVIEW VI Hierarchy and become frustrated with not being able to locate a VI you needed to open?
Do you have large applications composed of similar modules but fear to jump, with both feet, into the learning curve of LVOOP?
Did you ever try to duplicate a sub-VI at the start of a new set of functions and find yourself deep in a nest of cross-linked VIs, or save a VI only to realize that the most suitable name has already been used?
Then LabVIEW libraries may be useful to you.
Libraries are a feature available in the LabVIEW project or they can be created stand-alone*. They have a number of features that allow you to specify shared properties and attributes of related VIs and custom controls.
In short, many of the features of LVOOP are available without the complications required for Dynamic Dispatching. The remainder of this document will serve as a tutorial that demonstrates how to create, define, and clone a library. Additional notes are included to illustrate how these features can be exploited to help you develop more robust applications that are easier to support than applications that do not use libraries.
*Libraries can be created stand-alone from the LabVIEW splash screen using the method:
File >>> New … >>> Other Files >>> Library
You can create a new library from the project by right-clicking the “My Computer” icon and selecting “New >>> Library”. Save it to a unique folder that will contain all of the files associated with the library.
Open the properties screen and then the icon editor to compose a Common Icon for the library and its members.
Take a little time to create the icon because it will be shared by all of the members of the library. Do not get carried away and fill up the entire icon. Leave some white space so that the icons of the component VIs can be customized to illustrate their role in the functionality of the library.
Create virtual folders in the library to help organize the VIs contained in it. I usually use three folders but you can use more or less depending on your needs and preferences. I use one to hold the controls, and another pair for the public and private VIs. I do not use auto-populating folders for a number of reasons.
I can control which VIs are included and which are not. Occasionally temporary VIs are created to do some basic testing and they are never intended to be part of the library. If functionality changes and the temporary VI breaks due to the change, the library may cause a build to fail due to the broken VI.
I can easily move a VI from private to public without having to move the VI on disk and then properly update source code control to reflect the change.
I can keep the file paths shorter using the virtual folders while maintaining the structure of the project.
Additional virtual folders can be added if you want to further break down the organization of the VIs in the library. If you are developing a library that will be used by other developers or serve as a tool for others, you may want to include a folder for the VIs that define the API your library offers. The API can also be divided into additional virtual folders to break the interface down into functional areas if you wish. Implement the logical grouping of sub-VIs as needed for your library.
Set the Access Scope of the private virtual folder to private. While the private folder and the setting of the access scope are optional, taking advantage of these options will help you and the users of your library identify which VIs are not intended for use outside of the library. Attempting to use a VI with a private scope from outside the library itself will break the calling VI and make it very obvious that the VI is not intended for public use.
Developing applications using libraries differs very little from developing without them; there is almost no additional work to use them. The one exception is illustrated in Figure 8, where the name of the VI is highlighted. While the VI named in the project is shown as “Init_AI.vi”, the actual name of the VI is “DAQ.lvlib:AI.lvlib:Init_AI.vi”. The difference is the result of what is called “name mangling”: the actual name of the VI is prefixed by the names of the libraries that own it. This is a powerful feature that goes a long way toward avoiding cross-linking, and it will let us easily clone a library to use as the starting point of a similar library.
The Save As screen for the library will not only let us define the library name but also where in the project the library will be placed. This is handy for nested libraries but not critical; libraries can be moved around in the project, or between libraries, as needed using the project window. When a library is cloned using the Save As option, all of the VIs contained in the original library are duplicated and re-linked to the VIs in the new library. There is NO chance of cross-linking when cloning a library!
Libraries can help in all phases of an application from initial development to long-term support through to knowledge transfer. Remember, “Libraries” are your friend!

LabVIEW Improvements



LabVIEW passed its 30-year anniversary in 2016, and six months ago National Instruments launched a considerably updated version of LabVIEW: the next-generation LabVIEW NXG 1.0.
LabVIEW NXG is a totally reworked version of LabVIEW. By starting again from the ground up, LabVIEW NXG offers users significant improvements in performance as a result of the new code.
LabVIEW NXG offers some significant definitive improvements over the previous implementation of LabVIEW:
  • Plug & Play: a lot of work has gone into enabling LabVIEW NXG to provide easy set-up of hardware connections. It has true plug and play functionality.
  • IDE: The LabVIEW NXG environment has been totally overhauled to take elements of popular commercial software and replicate the attributes of the environment to make it more intuitive.
  • Tutorials: To facilitate the speedy uptake of newcomers to LabVIEW, the new LabVIEW NXG has inbuilt walk-throughs and other integrated learning facilities. This has been shown to greatly shorten the time it takes for newcomers to program proficiently in LabVIEW. It is even possible to undertake a number of standard tasks without “hitting the code.”
National Instruments will maintain both lines: the traditional LabVIEW 2017, launched alongside the new next-generation LabVIEW NXG, will continue to be developed, but ultimately, once total compatibility has been established, the two will converge, enabling users to benefit from the new streamlined core.
Users of LabVIEW will be given access to both LabVIEW 2017 and later versions as well as LabVIEW NXG. In this way, they can make the choice of which version suits their application best.
National Instruments spokespeople stressed that the traditional development line of LabVIEW will continue to be maintained so that the large investment in software and applications that users have is not at risk. However, drivers and many other areas are already compatible with both lines.
“Thirty years ago, we released the original version of LabVIEW, designed to help engineers automate their measurement systems without having to learn the esoterica of traditional programming languages. LabVIEW was the ‘nonprogramming’ way to automate a measurement system,” said Jeff Kodosky, NI co-founder and business and technology fellow, known as the ‘Father of LabVIEW.’
“For a long time, we focused on making additional things possible with LabVIEW, rather than furthering the goal of helping engineers automate measurements quickly and easily. Now we are squarely addressing this with the introduction of LabVIEW NXG, which we designed from the ground up to embrace a streamlined workflow. Common applications can use a simple configuration-based approach, while more complex applications can use the full open-ended graphical programming capability of the LabVIEW language, G.”

Monday 20 November 2017

9 Things to Consider When Choosing Automated Test Equipment



Automated test equipment (ATE) can reduce the cost of testing and ensure that lab teams can focus on other, more important tasks. With ATE, productivity and efficiency are boosted to a maximum level by cutting out unnecessary tasks and daily busywork.
However, you should not just cash out and invest in automated test equipment; there are factors that are important in finding the system that suits you best. Our team at ReadyDAQ has prepared nine things you should consider before choosing automated test equipment.

1. Endurance and Compactness

One of the most important things is that the ATE system your company picks is designed for optimal performance over the long term. Take a careful look at connections and components and judge whether they will survive repeated use. Many lab teams struggle to find large areas for their testing operations, so the automated test equipment should also be compact.

2. Customer Experience

Are other customers satisfied with the support and the overall experience they had? Does the company you bought the ATE from provide full support? You don't have to be an expert in automated test equipment, but they do. And their skills and expertise have to be available to you when you need them. Customer support and the overall customer experience are a huge factor!

3. Scalability and Compatibility

One purchase does not have to be final. It often isn't. You should check whether the equipment you ordered can be expanded or scaled over time. Your needs might change, and you want ATE that adapts to them.
As for compatibility, you want to make sure that the equipment is built following all industry standards. Cross-compatibility is often important in situations where we no longer need, or have lost access to, certain products. Better safe than sorry.

4. Comprehensive

Think of all the elements needed for testing. Even better, make a list. Does the equipment you have in mind cover ALL required elements? Don't forget about power and signaling – are they included too?

5. High Test Coverage and Diagnostics 

The ATE system has to be able to provide full coverage and give insights on all components of the processed product. This can help decrease the number of possible errors and failures later on.
How about diagnostics? Does the testing system provide robust diagnostic tools to make sure the obtained results are reliable and true?

6. Cost per Test

How much does a single test cost? You have to think and plan long-term, so the cost of a single test can help you calculate whether the system provides real value for the money invested.

7. Testimonials and Warranty 

Are other customers satisfied? Can the company direct you to testimonials from previous customers? What do their previous customers have to say about the systems and their performance?
Also, you don't want to be left hanging in case the system starts malfunctioning or simply stops working. Does the ATE system come with a comprehensive warranty? Make sure you're protected against damage that might happen in testing and see that the warranty covers that too.

8. Manufacturer Reputation

When did you first hear about the company? How? Did someone (besides them) say anything good about them? Is the company known for the high quality of their equipment? Discuss their past projects and learn more about the value their products provide.

9. Intuitive Performance

At first sight, is the system easy to use, or so complicated that it would require weeks of training for everyone in the lab? Does it offer intuitive performance within the testing procedure? Your team should be able to begin testing without having to go over every point in the technical manual in pinpoint detail.
Our team at ReadyDAQ is here to help you select the perfect automated test equipment for your lab.

Wednesday 30 August 2017

IoT: Standards, Legal Rights; Economy and Development


It is safe to say that, at this point, the fragmented nature of IoT will hinder, or even diminish, its value for users and industry. If IoT products are poorly integrated, inflexible in connectivity, or complex to operate, these factors can drive users as well as developers away from IoT. Also, poorly designed IoT devices can have negative consequences for the networks they connect to. Therefore, standardization is a logical next step, as it can bring appropriate standards, models, and best practices. Standardization can, in turn, bring about user benefits, innovation, and economic benefits.
 
Moreover, the widespread use of IoT devices raises many regulatory and legal issues. Yet, since IoT technology is rapidly changing, many regulatory bodies cannot keep up, so these bodies also need to adapt to the volatility of IoT technologies. One issue that frequently arises is what to do when IoT devices collect data in one jurisdiction and transmit it to another jurisdiction with, for example, more lenient laws for data protection. Also, the data collected from IoT devices are oftentimes liable to misuse, potentially causing legal issues for some users.
 
Other burning legal issues are the conflict between lawful surveillance and civil rights; data retention and ‘the right to be forgotten’; and legal liability for unaware users. Although the challenges are many in number and great in scope, IoT needs laws and regulations which protect the user and their rights but also do not stand in the way of innovation and trust.
 
Finally, the Internet of Things can bring great and numerous benefits to developing countries and economies. Many areas can be improved through IoT: agriculture, healthcare, and industry, to name a few. IoT can offer a connected ‘smart’ world and link aspects of people’s everyday lives into one huge web. IoT affects everything around it, but the risks, promises, and possible outcomes need to be discussed and debated if one is to pick the most effective ways forward.

Tuesday 29 August 2017

IoT: Security and Privacy



Two key IoT issues, which are also intertwined, are security and privacy: the data IoT devices store and work with needs to be safe from hackers, so as not to have sensitive data exposed to third parties. It is of utmost importance that IoT devices be secure from vulnerabilities and private so that users would feel safe in their surroundings and trust that their data shall not be exposed or worse, sold through illegal channels. Also, since devices are becoming more and more integrated into our everyday lives (many people store their credentials on their devices, for example), poorly secured devices can serve as entry points for cyber-attacks and leave data unprotected.
 
The nature of IoT devices means that every unsecured or inadequately secured device poses a potential threat. The problem runs even deeper because various devices can connect to each other automatically, preventing the user from knowing at first glance whether a security issue exists. Therefore, developers and users of IoT devices have an obligation to make sure no other devices are exposed to potential harm, which means constantly developing and testing security solutions for these challenges.
 
The second key issue, privacy, is thought to be a factor holding back the full development and implementation of IoT. Many users are concerned about their rights when it comes to their data being tracked, collected, and analyzed. IoT also raises concerns about the potential threat of being tracked, the inability to opt out of certain data collection, surveillance, etc. Strategies need to be implemented which, while enabling innovation, still respect user privacy choices and expectations. For the Internet of Things to truly be accepted, these challenges need to be examined and overcome, which is a great task and a test both for developers and for users.

Saturday 26 August 2017

IoT: Summary

The Internet of Things (or ‘IoT’ for short) is a hot topic in today’s world which carries extraordinary significance for the socio-economic and technical aspects of everyday life. Products designed for consumers, long-lasting goods, automobiles and other vehicles, sensors, utilities, and other everyday objects can become connected through the Internet and strong data-analytic capabilities, and therefore transform our surroundings. The Internet of Things is forecast to have an enormous impact on the economy; some analysts anticipate almost 100 billion interconnected IoT devices, while other analysts project that IoT devices will contribute more than $11 trillion to the global economy by 2025.
However, the Internet of Things comes with many important challenges which, if not overcome, could diminish or even halt its progress, leaving its potential advantages unrealized. One of the greatest challenges is security: the newspapers are filled with headlines alerting the public to the dangers of hacking internet-connected devices, identity theft, and privacy intrusion. These technical and security challenges remain and are constantly changing and developing; at the same time, new legal policies are emerging.
This document’s purpose is to help the Internet Society community find their way in the discourse about the Internet of Things regarding its pitfalls, shortcomings and promises.
Many broad ideas and complex thoughts surround the Internet of Things, and in order to find one’s way, the key concepts to look into (as they represent the foundation of the circumstances and problems of IoT) are:
- Transformational Potential: If IoT takes off, a potential outcome of it would be a ‘hyperconnected world’ where limitations on applications or services that use technology cease to exist.
- IoT Definitions: although there is no single universal definition, the term Internet of Things basically refers to connected objects, sensors, or items (not considered computers) which create, exchange, and control data with next to no human intervention.
- Enabling Technologies: Cloud computing, data analytics, connectivity and networking all lead to the ability to combine and interconnect computers, sensors and networks all in order to control other devices.
- Connectivity Models: There are four common communication models: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. These models show how flexible IoT devices can be when connecting and when providing value to their respective users.

Thursday 24 August 2017

What is RS-232, what is RS-422, and what is RS-485?

RS-232, RS-422 and RS-485 are serial connections which can be found in various consumer electronics devices. Namely, RS-232 (ANSI/EIA-232 Standard) is the serial connection which can be historically found on IBM-compatible PCs. It is employed in many different scenarios and for many purposes, such as connecting a mouse, a printer, or a modem, as well as connecting different industrial instrumentation. Due to improvements in line drivers and cables, applications often expand the performance of RS-232 beyond the distance and speed limits which are listed in the standard. RS-232 is restricted to point-to-point connections between PC serial ports and various other devices. RS-232 hardware can be employed for serial communication up to distances of 50 feet.
On the other hand, RS-422 (EIA RS-422-A Standard) is the serial connection which can be historically found on Apple Macintosh computers. RS-422 employs a differential electrical signal, as opposed to unbalanced signals referenced to ground with the RS-232. Differential transmission employs two lines each for transmitting and receiving signals which lead to greater immunity to noise and the signal can travel longer distances as compared to the RS-232. These advantages make RS-422 a better option to consider for industrial applications.
Finally, RS-485 (EIA-485 Standard) is an improvement over RS-422 because it increases the number of devices from 10 to 32 and defines the electrical characteristics necessary to maintain adequate signal voltages under maximum load. With this enhanced multi-drop capability, one is able to create networks of devices connected to a single RS-485 serial port. The noise immunity and multi-drop capability make RS-485 the serial connection of choice in industrial applications requiring many distributed devices networked to a PC or other controller for data collection, HMI, or other operations. RS-485 is a superset of RS-422; therefore, all RS-422 devices can be controlled by RS-485. RS-485 hardware can be employed for serial communication over up to 4000 feet of cable.

Tuesday 22 August 2017

Requirements of real time control

Real-time embedded control processors are individual computing units implemented into larger and far more complicated pieces of equipment, such as vehicles of all sorts (trucks, airplanes, boats, yachts, etc.), computer peripherals, audio systems, and military equipment and weapons. The control processors are said to be embedded because they are integrated into a piece of equipment which is not in itself considered a computer, nor does it primarily execute computing functions.

Requirements of real time control

Whether they are invisible or visible to the user, real-time control processors are nowadays widespread and incorporated into people’s daily life and actions. For example, an invisible real-time control processor can be found in vehicles: the ABS (anti-lock braking system), which holds the vehicle steady and prevents it from skidding on the road. A real-time control processor can also be used to replace high-cost, high-maintenance, and bulky components of a given system, while at the same time providing better functionality at a lower expense. In other cases, the presence of a real-time control processor may be visible, for example, an autopilot on an aircraft. But in all the aforementioned cases, the real-time control processor is still part of a larger system. And because it is a component of a greater system, and that system has its own requirements and operating capabilities, most of these systems constrain the processor in size, weight, cost, power, and reliability.

Simultaneously, though, the real-time control processor is bound to deliver top performance, because real-time events are mostly external inputs to the system that demand a response within milliseconds. If the processor fails to deliver a response in such a short time span, disaster may strike: the autopilot may not change the course of the aircraft in time and may misinform the pilot about altitude.
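To make the millisecond-deadline idea concrete, here is a minimal Python sketch of a control loop that checks its own response time; the 5 ms budget, the input and computation stubs, and the loop bound are all illustrative assumptions, not any particular system's requirements.

```python
import time

DEADLINE_S = 0.005                   # illustrative 5 ms response budget

def read_input():
    """Stand-in for sampling an external event or sensor."""
    return 42

def compute_response(x):
    """Stand-in for the control computation."""
    return x * 2

for _ in range(100):                 # bounded so the sketch terminates
    start = time.monotonic()
    response = compute_response(read_input())
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        # In a hard real-time system, a missed deadline is a fault,
        # not merely a slow answer.
        print(f"deadline miss: {elapsed * 1000:.2f} ms")
    time.sleep(0.001)                # placeholder for waiting on the next event
```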

Sunday 20 August 2017

LabVIEW Projects you should Know


STÄUBLI LABVIEW INTEGRATION LIBRARY
The DSM LabVIEW-Stäubli Control Library is created to simplify communications between a host PC running LabVIEW and a Stäubli robotic motion controller so as to control the robot from the LabVIEW environment. 
Stäubli robots are usually found in the automation industry. The standard Stäubli programming language, VAL3, is a flexible language allowing for a wide variety of tasking. Although the VAL3 language works well in its environment, there are limited options for connecting the robot to an existing PC-based test & measurement system. The LabVIEW language, on the other hand, was created from the start to run systems found in a research environment. The DSM LabVIEW-Stäubli Integration Library lets the user quickly create applications for a Stäubli robot using the familiar LabVIEW programming language.
 
 
AUTOMATED CRYOGENIC TEST STATION
A test station was built with the intent in mind to automate cyclic cryogenic exposure.  A LabVIEW program was inserted to automate the process and collect data. The software featured:
  • Checked the temperature of up to 8 thermocouples
  • Checked the life status of test specimens twice per cycle
  • Automated backups to allow for data recovery
  • Integration with a pneumatic control board and safety features
 
 
TENSILE TESTER CONTROL PROGRAM
This system is able to record high-resolution x-ray imagery of test subjects of aerospace alloys while they are under tensile and cyclic fatigue tests.  This capability can improve understanding of how grain refinement is used to enhance material properties.  The tensile tester can function in multiple modes of operation. The sample can be fully rotated within the tester, permitting three-dimensional imagery of samples.
 
DYNAMOMETER TEST STATION
A test station designed to characterize piezoelectric motors was built, with a programmable current source and a DC motor integrated into the system to apply a range of resistive torque loads to the motor under test. A torque load cell and a high-resolution encoder were used to measure torque and speed, which were collected at each resistive torque level, forming a torque curve. A LabVIEW project was programmed to run the test. Test settings were configured in the program, and data was collected by an NI DAQ card. The program also included data manipulation and analysis.
 
OPEN-LOOP ACTUATOR CONTROLLER
The goal is to characterize the actuator's performance in open loop so that a closed-loop control scheme can be developed. The program can output voltage waveforms as well as voltage steps up to 40 V. Voltage duration is programmable down to the millisecond, and an encoder integrated into the system provides real-time readings. The encoder features micron-level resolution and experiences considerable noise due to the vibrations present in the system, so the data is filtered after the test to report accurate, low-noise results.

Thursday 17 August 2017

I²C and SPI


Nowadays, at the low end of the transmission protocols, we find I²C (for ‘Inter-Integrated Circuit’) and SPI (for ‘Serial Peripheral Interface’). Both protocols are well suited to communication between integrated circuits, that is, slow communication with onboard components. Behind these two popular protocols stand two major companies – Philips for I²C and Motorola for SPI – and two different histories of why, when, and how each protocol was created.
The I²C bus was developed in 1982; its original purpose was to provide an easy way to connect a CPU to peripheral chips in a TV set. Peripheral devices in embedded systems are frequently connected to the microcontroller as memory-mapped I/O devices. One straightforward way to do this is to connect the components to the microcontroller's parallel address and data busses. This results in a lot of wiring on the PCB (printed circuit board) and additional ‘glue logic’ to decode the address bus to which all the peripherals are connected. I²C was created to conserve microcontroller pins, reduce that extra logic, and make PCBs simpler – in other words, to lower the cost.
SPI is a single-master communication protocol. This means that one central device, the master, initiates all communication with the slaves.
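As a concrete illustration, here is a minimal master-side sketch using the Linux spidev interface from Python; the bus and chip-select numbers, clock speed, and the bytes exchanged are assumptions for the example, not part of the protocol itself.

```python
# Minimal SPI master sketch (Linux spidev): the master supplies the
# clock and initiates every transfer; the slave only ever answers.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                    # bus 0, chip-select 0 (assumed wiring)
spi.max_speed_hz = 1_000_000      # 1 MHz clock, chosen by the master
spi.mode = 0                      # clock polarity/phase setting

tx = [0x9F]                       # example command byte
rx = spi.xfer2(tx + [0x00] * 3)   # full duplex: bytes return while clocking out
print("received:", rx)

spi.close()
```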

About Temperature Data Loggers

http://www.readydaq.com/temperature-data-logger
A data logger is, simply put, an electronic device which records and stores data. Data loggers – tools designed for recording or monitoring processes and various parameters – acquire data in many different ways. They have become a revolutionary solution for logging vast amounts of data and are nowadays represented by a vast array of devices, from small handheld units to complex systems. For example, a data logger can be applied to automobile and other vehicle control, to the acquisition of machine or engine data, and to monitoring the conditions present in a machine. Multichannel systems which track vibration, force, and various other measurements in turbines and generators can all be found. The findings are later presented as charts, graphs, and diagrams.

Temperature data logger

Temperature data loggers, also called temperature monitoring devices, are easy to find, and they offer a variety of solutions to suit any temperature measurement scenario. Data loggers which measure atmospheric temperature almost always have a built-in sensor which is used to measure the surrounding temperature in rooms, fridges, or other enclosed spaces. Needless to say, these instruments are capable of autonomous work; that is, they record temperatures over a defined period without a person attending to them.
Many different constructions are available for data-logging devices. Most of these devices have internal measuring sensors or can be linked to external sources. Also, most can be connected to via cable, RFID, or a wireless link for data retrieval, calibration, or setup; many can also be set up and controlled via a personal computer or a smartphone. These devices are usually small, battery-powered, and portable, equipped with internal memory for data storage, a connection of choice for data retrieval, and sensors.
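The autonomous record-over-a-defined-period behavior boils down to a simple sampling loop. Below is a minimal Python sketch; the sampling interval, sample count, file name, and the sensor stub are illustrative assumptions.

```python
# Sketch of an autonomous temperature logger: sample at a fixed
# interval and append timestamped readings to a CSV file.
import csv
import time
from datetime import datetime

INTERVAL_S = 60              # illustrative sampling period
SAMPLES = 5                  # bounded so the sketch terminates

def read_sensor():
    """Placeholder for a real internal or external temperature probe."""
    return 21.4              # degrees Celsius

with open("temperature_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(SAMPLES):
        writer.writerow([datetime.now().isoformat(), read_sensor()])
        f.flush()            # keep data safe if power is lost mid-run
        time.sleep(INTERVAL_S)
```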

Tuesday 15 August 2017

RS-232 and RS-485 Serial Communication Protocols


http://www.readydaq.com/temperature-data-logger
The RS-232/485 port sends and receives bytes of information serially, one bit at a time. Although this serial method is somewhat slower than parallel communication, which transmits an entire byte at once, it is far simpler and can be employed over longer distances, since its power consumption is lower than that of a parallel link. As an example, the IEEE 488 standard for parallel communication requires that the cabling between equipment be no more than 20 meters total, with no more than 2 meters between any two connected devices. RS-232/485 cabling, on the other hand, can be extended to 1200 meters or more.
Typically, RS-232/485 is employed to transmit American Standard Code for Information Interchange (ASCII) data. Although National Instruments serial hardware is able to transmit 7-bit as well as 8-bit data packages, many applications use 7-bit data. Seven-bit ASCII can represent the English alphabet, decimal numbers, and common punctuation marks, and it is a standard that virtually all hardware and software are able to comprehend. Serial communication uses three transmission lines: (1) ground, (2) transmit, and (3) receive. RS-232/485 communication is asynchronous, meaning no clock line is shared, and the port can send data on one line while receiving data on another. Other lines are available, but they are neither required nor always employed. The crucial serial characteristics are baud rate, data bits, stop bits, and parity. These parameters must match to allow communication between a serial device and a serial port on a computer.
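These four parameters map directly onto the configuration of a serial port in software. Here is a hedged Python sketch using the pyserial package; the port name, device query, and settings are assumptions for the example.

```python
# Opening a serial port with an explicit baud rate, data bits, stop
# bits, and parity. Both ends of the link must agree on all four.
# "COM1" is an assumed port name; on Linux it might be "/dev/ttyUSB0".
import serial

port = serial.Serial(
    "COM1",
    baudrate=9600,
    bytesize=serial.SEVENBITS,    # 7-bit ASCII payloads
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,                  # seconds to wait on reads
)

port.write(b"*IDN?\r\n")          # example ASCII query
print(port.readline())            # reply, if the device answered in time
port.close()
```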
The RS-232 port, or ANSI/EIA-232 port, is the serial connection found on most PCs. It is used for many purposes, such as connecting a mouse, a printer, or a modem, as well as various industrial instrumentation. The RS-232 protocol supports only one device connected to each port, whereas the RS-485 (EIA-485 Standard) protocol supports up to 32 devices on each port. With this enhanced multidrop capability, one can create networks of devices connected to a single RS-485 serial port. Noise immunity and multidrop capability make RS-485 the serial connection of choice in industrial applications that need many distributed instruments and peripherals connected to a PC or other controller for data collection.

Uses for Data Loggers


There are numerous uses for autonomous data loggers, one of which is environmental monitoring: they can be taken to locations that cannot be accessed easily with bulky temperature monitoring equipment, such as mountains, deserts, jungles, mines, caves, and other similar places. Data loggers, especially portable ones, can also be used in industrial and scientific surroundings – in factories and laboratories where temperature monitoring is highly desirable.
 
Another use for temperature data loggers is monitoring sensitive shipments and products, primarily fresh and prepared foods and other consumables, pharmaceuticals, organs ready for transplant, and various chemicals which react to elevated temperatures and need to be kept within range. Exposing such items to temperatures outside their designated ranges for a certain period of time can render them unusable. Therefore, portable data loggers are placed inside insulated containers or attached directly to products so as to monitor the temperature of the product being shipped. The placement of data loggers and sensors is also critical to the preservation of the product: several studies have confirmed that temperatures inside a shipping container (an insulated box, a refrigerator truck, or a refrigerated container) depend heavily on the proximity to exterior walls and the roof.
 
Modern data loggers also come equipped with the ability to measure temperature in real time. This information can then be used to check whether the product has been exposed to temperatures higher than prescribed for too long. When such high-temperature exposure is found, the conclusions may be that the shelf life of the products has been reduced and they need to be sold faster; that the cooling equipment failed during shipping in a way people could not perceive from the slight difference in temperature; or that the shipment has gone bad and is unusable because of critical temperature swings.
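A minimal analysis of such a log might total up the time spent above a limit. The following Python sketch uses hypothetical readings and limits purely for illustration.

```python
# Sketch of a cold-chain check: given timestamped readings, total up
# how long a shipment spent above its allowed temperature.
LIMIT_C = 8.0            # assumed temperature ceiling
MAX_MINUTES_OVER = 30    # assumed allowable exposure

log = [  # (minutes since loading, temperature in degrees C)
    (0, 4.1), (15, 5.0), (30, 9.2), (45, 9.8), (60, 6.5),
]

minutes_over = 0
for (t0, temp0), (t1, _) in zip(log, log[1:]):
    if temp0 > LIMIT_C:
        minutes_over += t1 - t0

if minutes_over > MAX_MINUTES_OVER:
    print(f"REJECT: {minutes_over} min above {LIMIT_C} C")
else:
    print(f"OK: {minutes_over} min above {LIMIT_C} C")
```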
 
All of this data coming from monitoring temperature can prove to be extremely useful as to reduce costs, prolong shelf life and avoid any damage to the precious goods so that they can be usable and in top condition upon arrival.

Thursday 10 August 2017

About I²C

I²C is a multi-master protocol that uses two signal lines. The two I²C signals are named ‘serial data’ (SDA) and ‘serial clock’ (SCL). There is no need for chip select (slave select) or arbitration logic. Basically, any number of slaves and any number of masters can be connected to these two signal lines and communicate with each other using a protocol that specifies:
 
7-bit slave addresses: every device connected to the bus has a unique address;
control bits for governing the start and end of communication, its direction, and the acknowledgement mechanism;
data divided into 8-bit bytes.
 
The data rate must be chosen from 100 kbps, 400 kbps, and 3.4 Mbps, respectively called standard mode, fast mode, and high-speed mode. Some I²C variants also include 10 kbps and 1 Mbps as valid speeds.
Physically, the I²C bus comprises the two active wires, SDA and SCL, and a ground connection. Both active wires are bi-directional. The I²C protocol specification states that the IC that initiates a data transfer on the bus is considered the bus master; consequently, all the other ICs are considered bus slaves at that moment.
 
At the electrical level, there is literally no conflict at all if multiple devices try to put a logic level on the I²C bus lines simultaneously. If one of the drivers attempts to write a logical zero and the other a logical one, the open-drain and pull-up arrangement ensures that there will be no short circuit, and the bus will indeed see a logical zero. In other words, in any conflict, a logic zero always ‘wins’.
Furthermore, the I²C protocol helps with handling communication problems. Any device present on the I²C bus listens to it permanently. Prospective masters encountering a START condition will wait until a STOP is seen before attempting a new bus access. Slaves on the I²C bus decode the device address that follows the START condition and check whether it matches their own. All the slaves that are not addressed will wait until a STOP condition is issued before listening to the bus again. Likewise, since the I²C protocol provides an active-low acknowledge bit after each byte, the master/slave pair can detect the presence of its counterpart. Ultimately, if anything else goes wrong, the device talking on the bus will know it simply by comparing what it sends with what it sees on the bus. If a difference is detected, a STOP condition must be issued, which releases the bus.
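In software, a basic master-side register read looks like the following Python sketch, which uses the Linux smbus2 package; the bus number, 7-bit address, and register are hypothetical placeholders.

```python
# Minimal I2C master sketch (Linux smbus2): address a slave by its
# 7-bit address and read one byte from a register. A missing or
# non-acknowledging slave surfaces as an I/O error.
from smbus2 import SMBus

I2C_BUS = 1          # e.g. /dev/i2c-1 on a Raspberry Pi
DEVICE_ADDR = 0x48   # hypothetical 7-bit slave address
REGISTER = 0x00      # hypothetical register to read

with SMBus(I2C_BUS) as bus:
    value = bus.read_byte_data(DEVICE_ADDR, REGISTER)
    print(f"register 0x{REGISTER:02X} = 0x{value:02X}")
```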

Wednesday 9 August 2017

I²C vs SPI - comparison


Bus topology / routing / resources

I²C needs two lines, while SPI officially defines at least four signals, or more as slaves are added. Some informal SPI variants need only three wires: SCLK, SS, and a bi-directional MISO/MOSI line; nevertheless, this practice still requires one SS line per slave. SPI needs extra work, logic, and/or pins if a multi-master architecture must be built on it. The only real limitation of I²C when building a system is the finite device address space of 7 bits, relieved by the 10-bit extension.
From this point of view, I²C is a clear winner over SPI in sparing pins and board routing, and in how effortless it is to build an I²C network.

Throughput / Speed

If data must be transferred at high speed, SPI is clearly the protocol of choice over I²C. SPI is full duplex; I²C is not. SPI does not define any speed limit, and implementations often go over 10 Mbps. I²C is limited to 1 Mbps in Fast Mode+ and to 3.4 Mbps in High-Speed Mode – and the latter requires special I/O buffers that are not always easily available.

Elegance

It is sometimes said that I²C is much more elegant than SPI and that the latter is a very ‘rough’ protocol. In practice, the two protocols can be considered equally elegant and comparable in robustness.
I²C is elegant because it offers very advanced features, such as automatic multi-master conflict handling and built-in addressing management, on a very light foundation. It can be very complex, nonetheless, and somewhat lacks performance.
SPI, on the other hand, is quite easy to understand and implement and offers a great deal of flexibility for extensions and variations. That simplicity is where the elegance of SPI lies. SPI should be considered a good platform for building custom protocol stacks for communication between ICs. Thus, according to the engineer's needs, using SPI may require more work but offers higher data transfer performance and almost total freedom.
Both SPI and I²C offer good support for communication with low-speed devices, but SPI is better suited to applications in which devices transfer data streams, while I²C is better at multi-master ‘register access’ applications.

Tuesday 8 August 2017

Basics and Applications of Optical Sensor

An optical sensor is one that converts light rays into an electronic signal. The purpose of an optical sensor is to measure a physical quantity of light and, depending on the type of sensor, translate it into a form readable by an integrated measuring device. Optical sensors can be both external and internal. External sensors gather and transmit a required quantity of light, while internal sensors measure bends and other small changes in direction.

Types of Optical Sensors

There are various kinds of optical sensors, and here are the most common types.

Through-Beam Sensors

The usual system consists of two independent components. The receiver and the transmitter are placed opposite each other. The transmitter projects a light beam onto the receiver, and an interruption of the light beam is interpreted as a switch signal by the receiver. It does not matter where along the beam the interruption occurs.
Its advantage is that large operating distances can be attained, and the detection is independent of the object’s surface structure, colour, or reflectivity.
To ensure high operational reliability, it must be assured that the object is sufficiently large to interrupt the light beam completely.

Diffuse Reflection Sensors

Both receiver and transmitter are in one housing. The transmitted light is reflected by the object that must be detected.
The diffused light intensity at the receiver serves as the switching condition. Regardless of the sensitivity setting, the front part of an object regularly reflects worse than the rear part, which can lead to false switching operations.
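That switching condition can be pictured as a threshold on received intensity, usually with some hysteresis so borderline reflections do not make the output chatter. A minimal Python sketch, with purely illustrative levels:

```python
# Threshold-with-hysteresis sketch of a diffuse sensor's switching
# condition: switch on above ON_LEVEL, off only below OFF_LEVEL, so
# a weakly reflecting object does not toggle the output rapidly.
ON_LEVEL = 0.60    # normalized received intensity to switch on
OFF_LEVEL = 0.40   # must fall below this to switch off again

output = False
for intensity in [0.20, 0.50, 0.70, 0.55, 0.35]:   # made-up readings
    if not output and intensity > ON_LEVEL:
        output = True
    elif output and intensity < OFF_LEVEL:
        output = False
    print(f"intensity {intensity:.2f} -> {'ON' if output else 'OFF'}")
```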

Retro-Reflective Sensors

Here, both transmitter and receiver are in the same housing. Through a reflector, the emitted light beam is directed back to the receiver. An interruption of the light beam initiates a switching operation, and it does not matter where the interruption occurs.
Retro-reflective sensors achieve large operating distances with switching points that are exactly reproducible, demanding little mounting effort. Any object interrupting the light beam is reliably detected independently of its colour or surface structure.

Thursday 3 August 2017

How Stack Machines Meet the Needs of Various Systems

There are various characteristics which need to be met in order for these machines to be suitable and to be fully and successfully implemented into real time systems. These characteristics are as follows: size and weight, power and cooling, operating environment, cost and performance.

Size and Weight 

It has been observed that stack computers are very simple in regards to processor complexity. However, it is the overall system complexity that determines overall system size and weight. The solution to overcoming the size and weight issue is to keep component count small. That is why stack machines are less complex than other machines and are also more reliable.

Power and Cooling

If the processor is complex, it can affect the amount of power it needs. That amount of power is related to how many transistors there are in a processor and how many pins are on the processor chip. Moreover, processors that need a lot of power-consuming high-speed memory devices can also be burdensome regarding power. Of course, power consumption directly affects cooling requirements, since all power used by a computer is eventually transmuted into heat. The cooler operation of processor components can reduce the number of component failures, thus improving reliability.

Operating Environment

Embedded processing systems are well known for extreme operating conditions. The processing system must deal with heat and cold, vibration, shock, and even radiation. Also, in remotely installed applications, the system must be able to survive without field service technicians to make repairs. The general rule for avoiding problems caused by operating environments is to keep the component count and the number of pins small. Stack machines, with their low system complexity and high levels of integration, do well under these conditions.

Cost

Since the cost of a chip is related to the number of transistors and to the number of pins on the chip, low complexity stack processors are basically low in cost.

Computing Performance

Computing performance in a real-time embedded control environment is not simply defined. Although raw computational performance is important, there are other factors which influence the system. An additional desirable trait is excellent execution of programs that are filled with procedure calls, which reduces program memory size.

How do RS-232, RS-422, and RS-485 compare to each other?

RS-232 (ANSI/EIA-232 Standard) is the most widespread serial interface; it used to ship as a standard component on most Windows-compatible desktop computers. Nowadays, it is more common to use a USB port with a converter instead. One drawback is that RS-232 only permits one transmitter and one receiver on each line. RS-232 also employs a full-duplex transmission method. Some RS-232 boards sold by National Instruments support baud rates up to 1 Mbit/s, but most devices are restricted to 115.2 kbit/s. RS-422 (EIA RS-422-A Standard), on the one hand, is the serial connection employed primarily on Apple computers. It provides a mechanism for sending and receiving data at up to 10 Mbit/s. RS-422 sends each signal over two wires in order to increase the maximum baud rate and cable length. RS-422 is also specified for multi-drop applications where only one transmitter is connected to, and transmits on, a bus of up to 10 receivers. RS-485, on the other hand, is a superset of RS-422 and expands on the capabilities of that earlier standard. RS-485 was developed to address the multi-drop limitation of RS-422, letting up to 32 devices communicate over the same data line. Any of the subordinate devices on an RS-485 bus can communicate with any of the other 32 subordinate or ‘slave’ devices without the master device receiving any signals. Since RS-422 is a subset of RS-485, all RS-422 devices can be controlled by RS-485.
Finally, both RS-485 and RS-422 have multi-drop capability built in, but RS-485 allows up to 32 devices while RS-422 has a limit of only 10. For both communication protocols, it is advisable to provide your own termination. All National Instruments RS-485 boards will work with RS-422 standards.

Wednesday 19 July 2017

How to keep multicloud complexity under control



Using multiple cloud providers provides needed flexibility, but it also multiplies the work and risk of getting out of sync
“Multicloud” means that you use multiple public cloud providers, such as Google and Amazon Web Services, AWS and Microsoft, or all three; you get the idea. Although this seems to provide the best flexibility, there are trade-offs to consider.
The drawbacks I see at enterprise clients relate to added complexity. Dealing with multiple cloud providers does give you a choice of storage and compute solutions, but you must still deal with two or more clouds, two or more companies, two or more security systems … basically, two or more ways of doing anything. It quickly can get confusing.
For example, one client confused security systems and thus inadvertently left portions of its database open to attack. It’s like locking the back door of your house but leaving the front door wide open. In another case, storage was allocated on two clouds at once, when only one was needed. The client did not find out until a very large bill arrived at the end of the month.
Part of the problem is that public cloud providers are not built to work together. Although they won’t push back if you want to use public clouds other than their own, they don’t actively support this usage pattern. Therefore, you must come up with your own approaches, management technology, and cost accounting.
The good news is that there are ways to reduce the multicloud burden.
For one, managed services providers (MSPs) can manage your multicloud deployments for you. They provide gateways to public clouds and out-of-the-box solutions for management, cost accounting, governance, and security. They will also be happy to take your money to host your applications, as well as provide access to public cloud services.
If you lean more toward the DIY approach, you can use cloud management platforms (CMPs). These place a layer of abstraction between you and the complexity of managing multiple public clouds. As a result, you use a single mechanism to provision storage and compute, as well as for security and management no matter how many clouds you are using.
I remain a fan of the multicloud approach. But you’ll get its best advantage if you understand the added complexity up front and the ways to reduce it.

6 Steps on How to Learn or Teach LabVIEW OOP - Part 2


Step 4 – Practice!
This stage is harder than the last. You need to make sure:
  • Each child class exactly reflects the abstract methods. If your calling code ever cares which sub-class it is calling, by using strange parameters or converting the type, then you are violating LSP – the Liskov substitution principle, the L of SOLID.
  • Each child class has something relevant to do for each of the abstract methods. If it has methods that make no sense, this is a violation of the interface segregation principle.
Step 5 – Finish SOLID
Read about the open-closed principle and the dependency inversion principle and try it in a couple of sections of code.
Open-closed basically means that you leave interfaces (abstract classes in LabVIEW) in the design. Then you can change the behavior by creating a new child class (open for extension) without having to modify the original code (closed to modification). This goes well with the dependency inversion principle, which says that higher-level classes should depend only on interfaces (again, abstract classes). The lower-level code implements these classes, so the high-level code can call the lower-level code without a direct dependency. This can help in places where coupling is difficult to design out.
I leave these principles to the end because I think they are the easiest ones to use to write difficult-to-read code. I’m still trying to find a balance with these – following them wholeheartedly creates lots of indirection, which can affect readability. I also think we don’t get as much benefit in LabVIEW with these since we don’t tend to package code within projects in the same way as other languages. (This may be a good topic for another post!)
Step 6 – Learn some design patterns
This was obviously part of the point of this article. When I came back to design patterns after understanding design better and the SOLID principles it allowed me to look at the patterns in a different way. I could relate them to the principles and I understood what problems they solved.
For example, the command pattern (where you effectively have a QMH which takes message classes) is a perfect example of a solution to meet the open-closed principle for an entire process. You can extend the message handler by adding support for new message types by creating new message classes instead of modifying the original code. This is how the actor framework works and has allowed the developers to create a framework where they have a reliable implementation of control of the actors but you can still add messages to define the functionality.
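Because LabVIEW code cannot be shown in text, here is a rough Python analogue of that idea; the Message, SetGain, and Stop classes are hypothetical stand-ins, not part of any LabVIEW framework.

```python
# Command-pattern analogue: the handler loop depends only on the
# abstract Message class, so new behavior is added by writing new
# subclasses (open for extension) without touching the handler
# (closed to modification).
from queue import Queue

class Message:
    def execute(self, state):
        raise NotImplementedError

class SetGain(Message):
    def __init__(self, gain):
        self.gain = gain
    def execute(self, state):
        state["gain"] = self.gain

class Stop(Message):
    def execute(self, state):
        state["running"] = False

def handler(queue):
    state = {"gain": 1.0, "running": True}
    while state["running"]:
        queue.get().execute(state)   # handler never inspects the concrete type
    return state

q = Queue()
q.put(SetGain(2.5))
q.put(Stop())
print(handler(q))                    # {'gain': 2.5, 'running': False}
```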
Once you understand why these design patterns exist you can then apply some critical thinking about why and when to use them. I personally dislike the command pattern in LabVIEW because I don’t think the additional overhead of a large number of message classes is worth the benefit of being able to add messages to a QMH without changing the original code.
I think this will help you use the patterns more effectively, and you will be less likely to end up with a spaghetti of design patterns thrown together just because that is what everyone was talking about.
Urmm… so what do I do?
I know this article sets out a program of study more than it gives you the information you need to actually follow it. All the steps still track the NI course on OOP, though, so you could simply self-pace that course as your general learning material.

Friday 14 July 2017

Project Management in the Medical Industry


data acquisition


The medical industry has grown many times over in the last decade, and the pace of innovation and development shows that, as technology advances, we can expect better and more affordable solutions in the health sector through innovative medical devices. For major medical device companies, innovation leads to prototyping, a major constituent of developing a medical device after thorough research. It is very important to focus solely on the development of the prototype, and one thing that hinders the process is developing the software to control the devices. For such project managers, ReadyDAQ is the one-stop shop for application development needs, offering the customizability and flexibility to develop applications for your data acquisition devices to match your needs and requirements.
 
The usual process of developing a prototype involves extensive research and study of the subject, after which data needs to be acquired from sensors and from operating devices such as pumps, motors, and drivers so that the device functions smoothly. Normally, software has to be developed individually for each device concerned, but with ReadyDAQ you have the freedom to plug and play devices without developing an exclusive application from scratch.
Since research projects come with tight schedules and deadlines, it is the project manager's duty to make sure that all time and focus are devoted to the development of the device, which makes ReadyDAQ the perfect solution to this problem.
ReadyDAQ supports multiple devices at a time, making it easier to control, read, acquire, and store signals. In the medical industry it is very important to get precise readings, so ReadyDAQ plays a major role in any medical innovation or R&D center where error-free values are necessary to build the perfect prototype.
Built in the LabVIEW environment, this application is the perfect solution for project managers looking to save time and expense. So, if you want to build the perfect prototype for the next big thing in the medical device industry, try ReadyDAQ. We offer a 7-day, 100% money-back guarantee.

Thursday 13 July 2017

3 Reasons to Automate your Business

automation
Let's accept the fact that the major focus of every technological development is improved efficiency, cost cutting, and better output. This is the main reason we focus on implementing the latest solutions in our businesses. An increasingly popular term we keep coming across is automation, and rightly so: it is the technology of the future, slowly connecting all the aspects of industrialization. The process of automation, although a long one, can be implemented using the right blend of software and hardware. But what makes automation our priority? Let's look at the three most important reasons behind this transition.
Filling the gap between supply and demand: With an ever-increasing population, every industry is under constant pressure to meet high demand. Automation is a necessity for tackling this problem, since it has helped increase output many times over. This increase in output has also led to less wastage and better efficiency.
Accuracy: Machine-made products are more consistent and precise than those made by the human hand. As more and more industries shift to automation, their output has increased compared with relying on human labor alone.
Cost cutting leads to increased efficiency: "An automatic machine equals a hundred men." Even if that number is not exact, it is safe to say that a machine can deliver output equal to a great deal of manpower. This not only saves money on salaries but also saves production time. Testing becomes easier, and the production process is simplified.
Looking at these factors, we realize how important automation actually is. But, as mentioned before, complementary software is essential for such hardware, and that is where ReadyDAQ jumps in. High-end machinery makes use of many operational devices, and ReadyDAQ offers a development solution for all their software needs without having to start from scratch. Supporting simultaneous operation of multiple devices, it is the perfect solution for industries implementing automation and its components. So, what are you waiting for? Download the 30-day trial version today and get a feel for the product before making that purchase!

Automation to Replace Human Hands?


automation

As we move deeper into technologically advanced methodologies and manufacturing processes, we realize the power of the human mind, which has developed a new league of technical procedures that have made our lives easy and our work easier. Manual labor may be on its way to extinction within a few years, thanks to highly advanced machinery and the automation industry. Automation, a blend of the words 'automatic' and 'execution', has enabled a major chunk of processes to be executed without the human component. And with the amount of innovation taking place around the globe, it is striking how robots and machinery have taken over daunting human tasks.

But why do we support bringing automation into our development processes, and how does it benefit industries? There is little doubt that machines can outperform humans in many aspects of manufacturing.
The precision and efficiency of an automatic machine are far better than those of a hundred humans working together, which is the most important reason people prefer machines. While a human would need numerous hours to assemble a product, a machine can manufacture and assemble the same product within minutes. This saves not only a lot of time but expense as well. Automation in industry is a one-time investment that yields long-term benefits and efficient output. Machines do demand maintenance, but they are still economical compared with manual labor.

In huge manufacturing units, automation is a widespread concept that has taken over from the human hand, mainly because of a demand and supply chain with an excessively large need for manufactured goods.
But it should also be noted that alongside the efficient hardware that goes into automating a factory, compatible and complementary software is also necessary: it is intelligent software that makes a machine deliver optimum output. For this reason ReadyDAQ, your one-stop shop for all development needs, was created. It offers solutions to your software problems and is built to handle operational devices such as pumps, motors, and sensors. As a plug-and-play medium for devices, it helps the automation process in factories and industries by letting users connect and operate devices without any major configuration or development. It comes with a 30-day trial version so you can get a feel for how it works before you actually make a purchase. So get yours today!

6 Steps on How to Learn or Teach LabVIEW OOP - Part 1

If you follow the NI training, you learn how to build a class on Thursday morning, and by Friday afternoon you are introduced to design patterns. Similarly, when I speak to people they seem keen to move quickly on to learning design patterns; certainly, in the earlier days of adoption this topic always came up very early.
I think this is too fast. It adds extra complexity to learning OOP, and personally I got very confused about where to begin.
Step 1 – The Basics
Learn how to make a class and the practical elements, like how private scope works. Use classes instead of whatever you used before for modules, e.g. action engines or libraries. Don't worry about inheritance or design patterns at this stage; that will come.
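As a rough text analogy (Python enforces privacy only by convention, so the underscore fields below merely stand in for LabVIEW's private class data, and all the names are invented):

    class TemperatureSensor:
        # A class replacing what might previously have been an action engine.
        def __init__(self, channel: str) -> None:
            self._channel = channel    # 'private' data, like the class's private cluster
            self._offset = 0.0

        def calibrate(self, offset: float) -> None:
            self._offset = offset      # callers change state only through methods

        def read(self) -> float:
            raw = 21.5                 # hypothetical raw reading for the sketch
            return raw + self._offset

    sensor = TemperatureSensor("Dev1/ai0")
    sensor.calibrate(0.3)
    print(sensor.read())               # 21.8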
Step 2 – Practice!
Work with the encapsulation you now have and refine your design skills to make objects that are highly cohesive and easy to read. Does each class do one job? Great: you have learned the single responsibility principle, the first of the SOLID principles of OO design. Personally, I feel this is the most important one.
If your classes are large, make them smaller until they each do just one job. Also pay attention to coupling: try to design code that doesn't couple too many classes together. This can be difficult at first, but small, specific classes help.
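A hypothetical before-and-after of the 'one job' test, sketched in Python with invented names:

    # Smell: one class that acquires data AND writes reports couples two jobs.
    class AcquireAndReport:
        def read_samples(self) -> list:
            return [1.0, 2.0]
        def write_report(self, samples: list) -> None:
            print("report:", samples)

    # Better: two small, cohesive classes that can change independently.
    class Acquisition:
        def read_samples(self) -> list:
            return [1.0, 2.0]

    class Report:
        def write(self, samples: list) -> None:
            print("report:", samples)

    Report().write(Acquisition().read_samples())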
Step 3 – Learn inheritance
Use dynamic dispatch methods to implement basic abstract classes when you need functionality that can be swapped out, e.g. a simulated hardware class or support for two types of data logs. I'd look at the channeling pattern at this point too. It's a very simple pattern that uses inheritance, and I have found it helpful in a number of situations. But no peeking at the others!
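The simulated-hardware case might look roughly like this in Python, where ordinary method overriding plays the role of LabVIEW's dynamic dispatch; the Dmm names are invented for illustration.

    from abc import ABC, abstractmethod

    class Dmm(ABC):
        # Abstract instrument: callers wire to this, never to a child.
        @abstractmethod
        def read_voltage(self) -> float: ...

    class RealDmm(Dmm):
        def read_voltage(self) -> float:
            return 4.98                # would talk to the instrument driver

    class SimulatedDmm(Dmm):
        def read_voltage(self) -> float:
            return 5.0                 # canned value for offline development

    def run_test(meter: Dmm) -> bool:
        # Dispatch picks the right read_voltage at run time.
        return abs(meter.read_voltage() - 5.0) < 0.1

    print(run_test(RealDmm()), run_test(SimulatedDmm()))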

Friday 7 July 2017

Setting Up a LabVIEW Project

labview freelancer consultant
Complete the following steps to set up the LabVIEW project:
 
  1. Launch LabVIEW by selecting Start»All Programs»National Instruments»LabVIEW.
  2. Click the Empty Project link in the Getting Started window to display the Project Explorer window. You can also select File»New Project to display the Project Explorer window.
  3. Select Help and make sure that Show Context Help is checked. You can refer to the context help throughout this process for information about items in the Project Explorer window and in your VIs.
  4. Right-click the top-level Project item in the Project Explorer window and select New»Targets and Devices from the shortcut menu to display the Add Targets and Devices dialog box.
  5. Make sure that the Existing target or device radio button is selected.
  6. Expand Real-Time CompactRIO.
  7. Select the CompactRIO controller to add to the project and click OK.
  8. Select FPGA Interface from the Select Programming Mode dialog box to put the system into FPGA Interface programming mode.
  9. Tip: Use the CompactRIO Chassis Properties dialog box to change the programming mode in an existing project. Right-click the CompactRIO chassis in the Project Explorer window and select Properties from the shortcut menu to display this dialog box.
  10. Click Discover in the Discover C Series Modules? dialog box if it appears.
  11. Click Continue.
  12. Drag and drop the C Series module(s) that will run in Scan Interface mode under the chassis item. Leave any modules you plan to write FPGA code for under the FPGA target.

Real-Time Processor

professional labview expert
The CompactRIO embedded system features an industrial 400 MHz Freescale MPC5200 processor that deterministically runs LabVIEW Real-Time applications on the reliable Wind River VxWorks real-time operating system. LabVIEW provides built-in operations for transferring data between the FPGA and the real-time processor within the CompactRIO embedded system. You can pick from more than 600 built-in LabVIEW functions to build a multithreaded embedded system for real-time analysis, control, data logging, and communication. To save development time, you can also combine existing C/C++ code with LabVIEW Real-Time code.

Starting a New CompactRIO Project in LabVIEW

Start by creating a new project in LabVIEW, where you can manage your hardware resources and code.
1. Create a new project in LabVIEW by selecting File » New Project.
2. Add your existing CompactRIO system to the project by right-clicking the Project item at the top of the tree and selecting New » Targets and Devices.
3. This dialog box lets you discover systems on your network or add offline systems. Expand the Real-Time CompactRIO folder, select your system, and click OK. Note: If your system is not listed, LabVIEW could not find it on the network. Ensure that the system is configured with a valid IP address in Measurement & Automation Explorer. If the system is on a remote subnet, you can also choose to enter its IP address manually.

Select the Appropriate Programming Model

LabVIEW offers two programming models for CompactRIO systems. If you have both LabVIEW Real-Time and LabVIEW FPGA installed on your development computer, you will be prompted to choose which programming model you would like to use. You can change this setting later in the LabVIEW Project if needed.
The Scan Interface (CompactRIO Scan Mode) option lets you program the real-time processor of the CompactRIO system, but not the FPGA. In this mode, NI provides a pre-defined personality for the FPGA that periodically scans the I/O and places it in a memory map, making it accessible to LabVIEW Real-Time. CompactRIO Scan Mode is sufficient for applications that require single-point access to I/O at rates of a few hundred hertz. To learn more about scan mode, read the "Using CompactRIO Scan Mode with NI LabVIEW" white paper and see its benchmarks.
The LabVIEW FPGA Interface option lets you unlock the true power of CompactRIO by customizing the FPGA personality in addition to programming the real-time processor, achieving performance that would typically require custom hardware. Using LabVIEW FPGA, you can implement custom timing and triggering, off-load signal processing and analysis, create custom protocols, and access I/O at its maximum rate.
Select the programming model appropriate for your application.
LabVIEW will then attempt to detect the C Series I/O modules present in your system and automatically add them to the chassis in the LabVIEW Project. Note: If your system was not discovered and you chose to add it offline, you will need to add the chassis and C Series I/O modules manually. The LabVIEW Help describes this process for both scan mode and FPGA mode.