
Monday, 18 December 2017

Engineers Turn to Automated Test Equipment to Save Time

http://www.readydaq.com/content/blog/engineers-turn-automated-test-equipment-save-time
With engineers rushing tests in order to hit tight product deadlines, the market for test equipment that automatically detects faults in semiconductors and other components is growing.
Setting aside time for testing has been a struggle for electrical engineers, and the shrinking size and increasing complexity of semiconductor circuits are not making life any easier. Nearly 15% of wireless engineers outsource final testing, and more than 45% contract out manufacturing, which is when most semiconductor testing takes place.
Almost 65% of the survey respondents said that testing is still a challenge in terms of time consumption. New chips designed for tiny connected sensors and autonomous cars also require rigorous testing to ensure reliability.
Tight deadlines for delivering new products are forcing engineers toward automated test equipment, also known as ATE, to quickly identify defects in semiconductors, especially those used in smartphones, communication devices, and consumer electronics.
The global automated test equipment market is estimated to reach $4.36 billion in 2018, up from $3.54 billion in 2011, according to Transparency Market Research, a technology research firm.
Automated test equipment is used extensively in semiconductor manufacturing, where integrated circuits on a silicon wafer must be tested before they are prepared for packaging. It cuts down on the time it takes to test more complex chips, which incorporate higher speeds, performance, and pin counts. Automatic testing also helps to locate flaws in system-on-chips, or SoCs, which often contain analog, mixed-signal, and wireless parts on the same silicon chip.


Saturday, 16 December 2017

Semiconductor Testing


http://www.readydaq.com/content/blog/semiconductor-testing

Automated test equipment (ATE) is computer-controlled test and measurement equipment that allows for testing with minimal human interaction. A tested device is referred to as a device under test (DUT). The advantages of this kind of testing include reduced testing time, repeatability, and cost efficiency in high volume. The chief disadvantages are the upfront costs of programming and setup.
Automated test equipment can test printed circuit boards, interconnections, and verifications. They are commonly used in wireless communication and radar. Simple ATEs include volt-ohm meters that measure resistance and voltages in PCs; complex ATE systems have several mechanisms that automatically run high-level electronic diagnostics.
ATE is used to quickly confirm whether a DUT works and to find defects. When the first out-of-tolerance value is detected, the testing stops and the device fails.
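As a rough illustration of that stop-on-first-failure flow, here is a minimal sketch in Python; the test limits and the measure() stand-in are invented, and a real ATE would take its readings from instruments through its own drivers:

    import random

    # Hypothetical pass/fail limits; a real test program would load these
    # from the device's test specification.
    TEST_LIMITS = {
        "supply_current_mA": (0.5, 12.0),
        "output_voltage_V":  (3.2, 3.4),
        "leakage_uA":        (0.0, 1.0),
    }

    def measure(parameter):
        # Stand-in for an instrument reading; replace with real DUT I/O.
        return random.uniform(0.0, 13.0)

    def run_test(dut_id):
        for name, (low, high) in TEST_LIMITS.items():
            value = measure(name)
            if not low <= value <= high:
                # First out-of-tolerance reading stops the test: the DUT fails.
                return False, f"DUT {dut_id}: {name} = {value:.3f} outside [{low}, {high}]"
        return True, f"DUT {dut_id}: pass"

    print(run_test("A1")[1])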

Semiconductor Testing

For ATEs that test semiconductors, the architecture consists of a master controller (a computer) that synchronizes one or more sources and capture instruments, such as an industrial PC or mass interconnect. The DUT is physically connected to the ATE by a machine called a handler, or prober, and through a customized Interface Test Adapter (ITA) that adapts the ATE's resources to the DUT.
When testing packaged parts, a handler places the device on a customized interface board, while silicon wafers are tested directly with high-precision probes.

Test Types

Logic Testing

Logic test systems are designed to test microprocessors, gate arrays, ASICs and other logic devices.
Linear or mixed-signal equipment tests components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), comparators, track-and-hold amplifiers, and video products. These components incorporate features such as audio interfaces, signal processing functions, and high-speed transceivers.
Passive component ATEs test passive components including capacitors, resistors, inductors, etc. Typically, testing is done by the application of a test current.
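To make the test-current idea concrete, here is a small sketch of a resistance measurement via Ohm's law; source_current() and measure_voltage() are hypothetical placeholders for whatever instrument driver calls the tester actually provides:

    def source_current(amps):
        pass  # command the current source (placeholder)

    def measure_voltage():
        return 0.470  # placeholder voltage reading, in volts

    def measure_resistance(test_current_a=0.001):
        source_current(test_current_a)  # force a known test current
        v = measure_voltage()           # read the resulting voltage drop
        return v / test_current_a       # R = V / I

    print(f"R = {measure_resistance():.1f} ohms")  # 0.470 V / 1 mA -> 470.0 ohms
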
Discrete ATEs test active components including transistors, diodes, MOSFETs, regulators, TRIACS, Zeners, SCRs, and JFETs.

Printed Circuit Board Testing

Printed circuit board testers include manufacturing defect analyzers, in-circuit testers, and functional analyzers.
Manufacturing defect analyzers (MDAs) detect manufacturing defects, such as shorts and missing components, but can't test digital ICs because they test with the DUT powered down (cold). As a result, they assume the ICs are functional. MDAs are much less expensive than other test options and are also referred to as analog circuit testers.
In-circuit analyzers test components that are part of a board assembly. The components under test are "in a circuit." The DUT is powered up (hot). In-circuit testers are very powerful but are limited due to the high density of tracks and components in most current designs. The pins for contact must be placed very accurately in order to make good contact. They are also referred to as digital circuit testers or ICT.
A functional test simulates an operating environment and tests a board against its functional specification. Functional automatic test equipment (FATE) has fallen out of favor because the equipment has not been able to keep up with the increasing speed of boards, causing a lag between the board under test and the manufacturing process. There are several types of functional test equipment, and they may also be referred to as emulators.

Interconnection and Verification Testing

Test types for interconnection and verification include cable and harness testers and bare-board testers.
Cable and harness testers are used to detect opens (missing connections), shorts (unwanted connections), and miswires (wrong pins) on cable harnesses, distribution panels, wiring looms, flexible circuits, and membrane switch panels with commonly used connector configurations. Other tests performed by automated test equipment include resistance and hipot tests.
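At its core, a harness check compares the measured pin-to-pin continuity against the expected wiring list, along these lines (the connection data here is invented):

    # Expected wiring list versus the result of a continuity scan.
    expected = {("A1", "B1"), ("A2", "B2"), ("A3", "B3")}
    measured = {("A1", "B1"), ("A2", "B3")}

    opens  = expected - measured   # missing connections
    extras = measured - expected   # unexpected connections
    # An unexpected connection landing on the wrong pin is a miswire;
    # two pins of different nets tied together would be a short.
    print("opens:", sorted(opens))
    print("miswires/shorts:", sorted(extras))
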
Bare-board automated test equipment is used to verify the completeness of a PCB circuit before assembly and wave solder.

Wednesday, 6 December 2017

Exploiting LabVIEW Libraries


Have you ever viewed a LabVIEW VI Hierarchy and become frustrated with not being able to locate a VI you needed to open?
Do you have large applications composed of similar modules but fear to jump, with both feet, into the learning curve of LVOOP?
Did you ever try to duplicate a sub-VI at the start of a new set of functions and find yourself deep in a nest of cross-linked VIs, or save a VI only to realize that the most suitable name has already been used?
Then using LabVIEW libraries may be useful to you.
Libraries are a feature available in the LabVIEW project or they can be created stand-alone*. They have a number of features that allow you to specify shared properties and attributes of related VIs and custom controls.
In short, many of the features of LVOOP are available without the complications required for Dynamic Dispatching. The remainder of this document will serve as a tutorial that demonstrates how to create, define, and clone a library. Additional notes are included to illustrate how these features can be exploited to help you develop more robust applications that are easier to support than applications that do not use libraries.
*Libraries can be created stand-alone from the LabVIEW splash screen using the method:
File >>> New … >>> Other Files >>> Library
You can create a new library from the project by right-clicking the “My Computer” icon and selecting “New >>> Library”. Save it to a unique folder that will contain all of the files associated with the library.
Open the properties screen and then open the icon editor to compose a common icon for the library and its members.
Take a little time to create the icon because it will be shared by all of the members of the library. Do not get carried away and fill-up the entire icon. Leave some white space so that the icons of the component VIs can be customized to illustrate their role in the functionality of the library.
Create virtual folders in the library to help organize the VIs contained in it. I usually use three folders but you can use more or less depending on your needs and preferences. I use one to hold the controls, and another pair for the public and private VIs. I do not use auto-populating folders for a number of reasons.
I can control which VIs are included and which are not. Occasionally temporary VIs are created to do some basic testing and they are never intended to be part of the library. If functionality changes and the temporary VI breaks due to the change, the library may cause a build to fail due to the broken VI.
I can easily move a VI from private to public without having to move the VI on disk and then properly updating source code control to reflect the change.
I can keep the file paths shorter using the virtual folders while maintaining the structure of the project.
Additional virtual folders can be added if you want to further break down the organization of the VIs in the library. If you are developing a library that will be used by other developers or serve as a tool for others, you may want to include a folder for the VIs that define the API your library offers. The API can also be divided into additional virtual folders to break the interface into functional areas if you wish. Implement the logical grouping of sub-VIs as needed for your library.
Set the access scope of the private virtual folder to private. While the private folder and the setting of the access scope are optional, taking advantage of these options will help you and the users of your library identify which VIs are not intended for use outside of the library. Attempting to use a VI with private scope from outside the library itself will break the calling VI and make it very obvious that the VI is not intended for public use.
Developing applications using libraries differs little from developing without them; with one exception, there is no additional work to use them. The exception is illustrated in Figure 8, where the name of the VI is highlighted. While the VI named in the project is shown as “Init_AI.vi”, the actual name of the VI is “DAQ.lvlib:AI.lvlib:Init_AI.vi”. The difference is the result of what is called “name mangling”: the actual name of the VI is prefixed by the names of the libraries that own it. This is a powerful feature that goes a long way toward avoiding cross-linking and will let us easily clone a library to use as the starting point of a similar library.
The Save As screen for the library will not only let us define the library name but also where in the project the library will be placed. This is handy for nested libraries but not critical; libraries can be moved around in the project or between libraries as needed using the project window. When a library is cloned using the Save As option, all of the VIs contained in the original library are duplicated and re-linked to the VIs in the new library. There is NO chance of cross-linking when cloning a library!
Libraries can help in all phases of an application from initial development to long-term support through to knowledge transfer. Remember, “Libraries” are your friend!

LabVIEW Improvements



LabVIEW passed its 30-year anniversary in 2016, and six months ago National Instruments launched a considerably updated version of LabVIEW: its next-generation LabVIEW NXG 1.0.
LabVIEW NXG is a totally reworked version of LabVIEW. Because it has been rebuilt from the ground up, the new code base delivers a considerably improved level of performance.
LabVIEW NXG offers some significant definitive improvements over the previous implementation of LabVIEW:
  • Plug & Play: a lot of work has gone into enabling LabVIEW NXG to provide easy set-up of hardware connections. It has true plug and play functionality.
  • IDE: The LabVIEW NXG environment has been totally overhauled, borrowing elements of popular commercial software to make the environment more intuitive.
  • Tutorials: To facilitate the speedy uptake of newcomers to LabVIEW, the new LabVIEW NXG has built-in walk-throughs and other integrated learning facilities. These have been shown to greatly reduce the time it takes for newcomers to program proficiently in LabVIEW. It is even possible to undertake a number of standard tasks without “hitting the code.”
National Instruments will be running both lines: the traditional LabVIEW 2017, which has been launched alongside the new next-generation LabVIEW NXG. Ultimately, when total compatibility has been established, the two will converge, enabling users to benefit from the new streamlined core.
Users of LabVIEW will be given access to both LabVIEW 2017 and later versions as well as LabVIEW NXG. In this way, they can make the choice of which version suits their application best.
National Instruments spokespeople stressed that the traditional development line of LabVIEW will continue to be maintained so that the large investment in software and applications that users have is not at risk. However, drivers and many other areas are already compatible with both lines.
“Thirty years ago, we released the original version of LabVIEW, designed to help engineers automate their measurement systems without having to learn the esoterica of traditional programming languages. LabVIEW was the ‘nonprogramming’ way to automate a measurement system,” said Jeff Kodosky, NI co-founder and business and technology fellow, known as the ‘Father of LabVIEW.’
“For a long time, we focused on making additional things possible with LabVIEW, rather than furthering the goal of helping engineers automate measurements quickly and easily. Now we are squarely addressing this with the introduction of LabVIEW NXG, which we designed from the ground up to embrace a streamlined workflow. Common applications can use a simple configuration-based approach, while more complex applications can use the full open-ended graphical programming capability of the LabVIEW language, G.”

Monday, 20 November 2017

9 Things to Consider When Choosing Automated Test Equipment



Automated test equipment (ATE) has the ability to reduce the costs of testing and ensure that lab teams can focus on other, more important tasks. With ATE, productivity and efficiency are boosted to a maximum by cutting out unnecessary tasks and daily activities.
However, you should not just cash out and invest in automated test equipment: there are factors that are important in finding the system that suits you best. Our team at ReadyDAQ has prepared nine things you should consider before choosing automated test equipment.

1. Endurance and Compactness

One of the most important things is that the ATE system your company picks is designed for optimal performance over the long term. Take a careful look at connections and components and judge whether they will survive repeated use. Many lab teams struggle to find large areas for their testing operations, so the automated test equipment should also be compact.

2. Customer Experience

Are other customers satisfied with the support and the overall experience they went through? Does the company you bought the ATE from provide full support? You don't have to be an expert in automated test equipment, but they do, and their skills and expertise have to be available to you when you need them. Customer support and the overall customer experience are a huge factor!

3. Scalability and Compatibility

One purchase does not have to be final; it often isn't. You should check whether the equipment you ordered can be expanded or scaled over time. Your needs might change, and you want the ATE to adapt to them.
As for compatibility, make sure that the equipment is built following all industry standards. Cross-compatibility is often important in situations where we no longer need, or have lost access to, certain products. Better safe than sorry.

4. Comprehensive

Think of all the elements needed for testing. Even better, make a list. Does the equipment you have in mind cover ALL required elements? Don't forget about power and signaling: are they included too?

5. High Test Coverage and Diagnostics 

The ATE system has to be able to provide full coverage and give insights on all components of the processed product. This can help decrease the number of possible errors and failures later on.
How about diagnostics? Does the testing system provide robust diagnostic tools to make sure the obtained results are reliable and true?

6. Cost per Test

How much does a single test cost? You have to think and plan long term, so the cost per test can help you estimate whether the system provides real value for the money invested.
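A back-of-the-envelope calculation is enough to get a feel for this; all the figures below are made up:

    equipment_cost = 250_000.0        # purchase price
    lifetime_tests = 2_000_000        # tests expected over the system's life
    operating_cost_per_test = 0.02    # consumables, power, maintenance share

    cost_per_test = equipment_cost / lifetime_tests + operating_cost_per_test
    print(f"${cost_per_test:.3f} per test")   # -> $0.145 per test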

7. Testimonials and Warranty 

Are other customers satisfied? Can the company direct you to testimonials from previous customers? What do their previous customers have to say about the systems and their performance?
Also, you don't want to be left hanging in case the system starts malfunctioning or simply stops working. Does the ATE system come with a comprehensive warranty? Make sure you’re protected against damages that might happen in testing and see that the warranty covers that too.

8. Manufacturer Reputation

When did you first hear about the company? How? Did someone (besides them) say anything good about them? Is the company known for the high quality of their equipment? Discuss their past projects and learn more about the value their products provide.

9. Intuitive Performance

At first sight, is the system easy to use, or so complicated that it would require weeks of training for everyone in the lab? Does it offer intuitive performance within the testing procedure? Your team should be able to begin testing without having to go over every point in the technical manual in pinpoint detail.
Our team at ReadyDAQ is here to help you select the perfect automated test equipment for your lab.

Wednesday, 30 August 2017

IoT: Standards, Legal Rights; Economy and Development


It is safe to say that, at this point, the fragmented nature of IoT will hinder, or even diminish, its value for users and industry. If IoT products are poorly integrated, inflexible regarding connectivity, or complex to operate, these factors can drive users as well as developers away from IoT. Also, poorly designed IoT devices can have negative consequences for the networks they connect to. Therefore, standardization is a logical next step, as it can bring appropriate standards, models, and best practices. Standardization can, in turn, bring about user benefits, innovation, and economic benefits.
 
Moreover, widespread use of IoT devices brings about many regulatory and legal issues. Yet, since IoT technology is rapidly changing, many regulatory bodies cannot keep up, so they need to adapt to the volatility of IoT technologies. One issue that frequently comes up is what to do when IoT devices collect data in one jurisdiction and transmit it to another jurisdiction with, for example, more lenient laws for data protection. Also, the data collected from IoT devices are oftentimes liable to misuse, potentially causing legal issues for some users.
 
Other burning legal issues are the conflict between lawful surveillance and civil rights; data retention and ‘the right to be forgotten’; and legal liability for unaware users. Although the challenges are many in number and great in scope, IoT needs laws and regulations which protect the user and their rights but also do not stand in the way of innovation and trust.
 
Finally, the Internet of Things can bring great and numerous benefits to developing countries and economies. Many areas can be improved through IoT: agriculture, healthcare, and industry, to name a few. IoT can offer a connected ‘smart’ world and link aspects of people’s everyday lives into one huge web. IoT affects everything around it, but the risks, promises, and possible outcomes need to be talked about and debated if one is to pick the most effective ways to go forward.

Tuesday, 29 August 2017

IoT: Security and Privacy



Two key IoT issues, which are also intertwined, are security and privacy: the data IoT devices store and work with needs to be safe from hackers, so as not to have sensitive data exposed to third parties. It is of utmost importance that IoT devices be secure from vulnerabilities and private so that users would feel safe in their surroundings and trust that their data shall not be exposed or worse, sold through illegal channels. Also, since devices are becoming more and more integrated into our everyday lives (many people store their credentials on their devices, for example), poorly secured devices can serve as entry points for cyber-attacks and leave data unprotected.
 
The nature of IoT devices means that every unsecured or inadequately secured device poses a potential threat. This security problem runs even deeper because various devices can connect to each other automatically, preventing the user from knowing at first glance whether a security issue exists. Therefore, developers and users of IoT devices have an obligation to make sure that no other devices come to any potential harm, so they must constantly develop and test security solutions for these challenges.
 
The second key issue, privacy, is thought to be a factor holding back the full development and implementation of IoT. Many users are concerned about their rights when it comes to their data being tracked, collected, and analyzed. IoT also raises concerns regarding the potential threat of being tracked, the inability to opt out of certain data collection, surveillance, etc. Strategies need to be implemented which, while enabling innovation, still respect user privacy choices and expectations. In order for the Internet of Things to truly be accepted, these challenges need to be looked into and overcome, which is a great task and a test both for developers and for users.

Saturday, 26 August 2017

IoT: Summary

The Internet of Things (or, shortened, ‘IoT’) is a hot topic in today’s world which carries extraordinary significance in the socio-economic and technical aspects of everyday life. Products designed for consumers, long-lasting goods, automobiles and other vehicles, sensors, utilities, and other everyday objects are able to become connected among themselves through the Internet and strong data analytic capabilities, and therefore transform our surroundings. The Internet of Things is forecast to have an enormous impact on the economy: some analysts anticipate almost 100 billion interconnected IoT devices, and others project that IoT will add more than $11 trillion to the global economy by 2025.
However, the Internet of Things comes with many important challenges which, if not overcome, could diminish or even halt its progress, thus failing to realize all its potential advantages. One of the greatest challenges is security: the newspapers are filled with headlines alerting the public to the dangers of hacking internet-connected devices, identity theft, and privacy intrusion. These technical and security challenges remain and are constantly changing and developing; at the same time, new legal policies are emerging.
This document’s purpose is to help the Internet Society community find their way in the discourse about the Internet of Things regarding its pitfalls, shortcomings and promises.
Many broad ideas and complex thoughts surround the Internet of Things. To find one’s way, the key concepts to look into, as they represent the foundation of the circumstances and problems of IoT, are:
- Transformational Potential: If IoT takes off, a potential outcome of it would be a ‘hyperconnected world’ where limitations on applications or services that use technology cease to exist.
- IoT Definitions: although there is no one universal definition, the term Internet of Things basically refers to connected objects, sensors, or items (not considered computers) which create, exchange, and control data with next to no human intervention.
- Enabling Technologies: Cloud computing, data analytics, connectivity and networking all lead to the ability to combine and interconnect computers, sensors and networks all in order to control other devices.
- Connectivity Models: There are four common communication models: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. These models show how flexible IoT devices can be when connecting and when providing value to their respective users. A minimal sketch of the Device-to-Cloud model follows this list.
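For instance, in the Device-to-Cloud model a sensor pushes its readings straight to a cloud service. A minimal sketch in Python using plain HTTP; the endpoint URL and payload schema are invented, and real deployments often use MQTT or CoAP instead:

    import json, urllib.request

    # One sensor reading, pushed device-to-cloud (hypothetical endpoint).
    reading = {"device_id": "sensor-42", "temperature_c": 21.5}
    req = urllib.request.Request(
        "https://cloud.example.com/ingest",
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment against a real endpoint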

Sunday, 20 August 2017

LabVIEW Projects you should Know


STÄUBLI LABVIEW INTEGRATION LIBRARY
The DSM LabVIEW-Stäubli Control Library is created to simplify communications between a host PC running LabVIEW and a Stäubli robotic motion controller so as to control the robot from the LabVIEW environment. 
Stäubli robots are usually found in the automation industry. The standard Stäubli programming language, VAL3, is a flexible language allowing for a wide variety of tasks. Although the VAL3 language works well in its environment, there are limited options for connecting the robot to an existing PC-based test & measurement system. The LabVIEW language, on the other hand, was created from the start to run systems found in a research environment. The DSM LabVIEW-Stäubli Integration Library lets the user promptly create applications for a Stäubli robot using the familiar LabVIEW programming language.
 
 
AUTOMATED CRYOGENIC TEST STATION
A test station was built to automate cyclic cryogenic exposure. A LabVIEW program was written to automate the process and collect data. The software featured:
Monitoring of the temperature of up to 8 thermocouples
Checks of the life status of test specimens twice per cycle
Automated backups to allow for data recovery
Integration with a pneumatic control board and safety features
 
 
TENSILE TESTER CONTROL PROGRAM
This system is able to record high-resolution X-ray imagery of aerospace alloy test specimens while they are under tensile and cyclic fatigue tests. This capability can improve understanding of how grain refinement can be used to enhance material properties. The tensile tester can function in multiple modes of operation, and the sample can be fully rotated within the tester, permitting three-dimensional imagery of samples.
 
DYNAMOMETER TEST STATION
A test station designed to characterize piezoelectric motors was built, with a programmable current source and a DC motor integrated into the system to apply a range of resistive torque loads to the tested motor. A torque load cell and a high-resolution encoder were used to measure torque and speed, and data was collected at each resistive torque level, forming a torque curve. A LabVIEW project was programmed to run the test: test settings were configured in the program, and data was collected by an NI DAQ card. The program also included data manipulation and analysis.
 
OPEN-LOOP ACTUATOR CONTROLLER
The goal is to characterize the actuator's performance in open loop so that a closed-loop control scheme can be developed. This program can output voltage waveforms as well as voltage steps up to 40 V. Voltage duration is programmable down to the millisecond, and an encoder integrated into the system provides real-time readings. The encoder features micron-level resolution but picks up considerable noise due to the vibrations present in the system, so the data is filtered after the test to report accurate, low-noise results.

Tuesday, 8 August 2017

Basics and Applications of Optical Sensor

An optical sensor is one that converts light rays into an electronic signal. The purpose of an optical sensor is to measure a physical quantity of light and, depending on the sort of sensor, translate it into a form that is readable by an integrated measuring device. Optical sensors can be both external and internal. External sensors collect and transmit an appropriate quantity of light, while internal sensors measure bends and other small changes in direction.

Types of Optical Sensors

There are various kinds of optical sensors, and here are the most common types.

Through-Beam Sensors

The usual system consists of two independent components: the receiver and the transmitter, placed opposite each other. The transmitter projects a light beam onto the receiver, and an interruption of the light beam is interpreted as a switch signal by the receiver. It does not matter where along the beam the interruption appears.
Its advantage is that large operating distances can be attained and the detection is independent of the object’s surface structure, colour, or reflectivity.
To ensure high operational dependability, it must be assured that the object is sufficiently large to interrupt the light beam completely.

Diffuse Reflection Sensors

Both receiver and transmitter are in one housing. The transmitted light is reflected by the object that must be identified.
The diffuse light intensity at the receiver serves as the switching condition. Regardless of the sensitivity setting, the front part of an object regularly reflects worse than the rear part, and this can lead to false switching operations.
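In software terms, such an intensity-based switching condition is usually given some hysteresis so readings near the threshold don't chatter. A small sketch, with arbitrary intensity units and thresholds:

    ON_THRESHOLD = 0.60   # object detected above this reflected intensity
    OFF_THRESHOLD = 0.50  # object considered gone below this

    def update_switch(intensity, currently_on):
        # Two thresholds (hysteresis) prevent chatter around one threshold.
        if not currently_on and intensity > ON_THRESHOLD:
            return True
        if currently_on and intensity < OFF_THRESHOLD:
            return False
        return currently_on

    state = False
    for sample in (0.20, 0.55, 0.65, 0.58, 0.45):
        state = update_switch(sample, state)
        print(sample, "->", "ON" if state else "OFF")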

Retro-Reflective Sensors

Here, both transmitter and receiver are in the same housing. Through a reflector, the radiated light beam is directed back to the receiver. An interruption of the light beam initiates a switching operation; it does not matter where the interruption occurs.
Retro-reflective sensors achieve large operating distances with switching points that are completely reproducible while demanding little mounting effort. Any object interrupting the light beam is precisely detected, independently of its colour or surface structure.

Thursday, 3 August 2017

How do RS-232, RS-422, and RS-485 compare to each other?

RS-232 (ANSI/EIA-232 standard) is the most widespread serial interface, and it ships as a standard component on most Windows-compatible desktop computers. Nowadays, it is more common to use RS-232 directly than to use USB with a converter. One drawback is that RS-232 only permits one transmitter and one receiver on each line. RS-232 also employs a full-duplex transmission method. Some RS-232 boards sold by National Instruments support baud rates up to 1 Mbit/s, but most devices are restricted to 115.2 kbit/s.
RS-422 (EIA RS-422-A standard) is the serial connection employed primarily on Apple computers. It provides a mechanism for sending and receiving data at up to 10 Mbit/s. RS-422 sends each signal using two wires in order to increase the maximum baud rate and cable length. RS-422 is also specified for multi-drop applications, where a single transmitter drives a bus of up to 10 receivers.
RS-485 is a superset of RS-422 and expands on its capabilities. RS-485 was developed to address the multi-drop limitation of RS-422, allowing up to 32 devices to communicate on the same data line. Any of the subordinate devices on an RS-485 bus can communicate with any of the other 32 subordinate or ‘slave’ devices without the master device receiving any signals. Since RS-422 is a subset of RS-485, all RS-422 devices can be controlled by RS-485.
Finally, both RS-485 and RS-422 have built-in multi-drop capability, but RS-485 allows up to 32 devices while RS-422 is limited to only 10. For both communication protocols, it is advisable to provide your own termination. All National Instruments RS-485 boards will work with RS-422 standards.
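From software, all three standards usually look like an ordinary serial port. As a sketch of configuring one, assuming the third-party pyserial package and a system-specific port name ("COM1" on Windows, "/dev/ttyUSB0" on Linux):

    import serial  # third-party pyserial package (pip install pyserial)

    port = serial.Serial(
        "COM1",                        # system-specific port name
        baudrate=115200,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1.0,                   # read timeout in seconds
    )
    port.write(b"*IDN?\r\n")           # example query; device-specific
    print(port.readline())
    port.close()

With RS-422/485 the same code generally applies; the differential signalling and multi-drop behavior live in the transceiver hardware, not in the programming interface.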

Friday, 7 July 2017

Setting up LabVIEW Project

Complete the following steps to set up the LabVIEW project:
 
  1. Launch LabVIEW by selecting Start»All Programs»National Instruments»LabVIEW.
  2. Click the Empty Project link in the Getting Started window to display the Project Explorer window. You can also select File»New Project to display the Project Explorer window.
  3. Select Help and make sure that Show Context Help is checked. You can refer to the context help throughout this process for information about items in the Project Explorer window and in your VIs.
  4. Right-click the top-level Project item in the Project Explorer window and select New»Targets and Devices from the shortcut menu to display the Add Targets and Devices dialog box.
  5. Make sure that the Existing target or device radio button is selected.
  6. Expand Real-Time CompactRIO.
  7. Select the CompactRIO controller to add to the project and click OK.
  8. Select FPGA Interface from the Select Programming Mode dialog box to put the system into FPGA Interface programming mode.
  9. Tip: Use the CompactRIO Chassis Properties dialog box to change the programming mode in an existing project. Right-click the CompactRIO chassis in the Project Explorer window and select Properties from the shortcut menu to display this dialog box.
  10. Click Discover in the Discover C Series Modules? dialog box if it appears.
  11. Click Continue.
  12. Drag and drop the C Series module(s) that will run in Scan Interface mode under the chassis item. Leave any modules you plan to write FPGA code for under the FPGA target.

Real-Time Processor

The CompactRIO embedded system features an industrial 400 MHz Freescale MPC5200 processor that deterministically executes LabVIEW Real-Time applications on the reliable Wind River VxWorks real-time operating system. Built-in functions for transferring data between the FPGA and the real-time processor within the CompactRIO embedded system are available in LabVIEW. You can pick from more than 600 built-in LabVIEW functions to build your multithreaded embedded system for real-time analysis, control, data logging, and communication. To save on development time, you can likewise combine existing C/C++ code with LabVIEW Real-Time code.

Starting a New CompactRIO Project in LabVIEW

Commence by creating a new project in LabVIEW, where you can manage your hardware resources and code.
1.       Create a new project in LabVIEW by selecting File » New Project.
2.       Add your existing CompactRIO system to the project by right-clicking the Project item at the top of the tree and selecting New » Targets and Devices.
3.       This dialogue lets you discover systems on your network or add offline systems. Expand the Real-Time CompactRIO folder, select your system, and click OK. Note: If your system is not listed, LabVIEW might not be able to find it on the network. Ensure that the system is properly configured with a valid IP address in Measurement & Automation Explorer. You can also choose to manually enter the IP address if the system is on a remote subnet.

Select the Appropriate Programming Model

LabVIEW grants two programming models for CompactRIO systems. If you have both LabVIEW Real-Time and LabVIEW FPGA on your development computer, you will be prompted to pick which programming model you would like to use. You can change this setting later in the LabVIEW Project, if needed.
The Scan Interface (CompactRIO Scan Mode) option allows you to program the real-time processor of your CompactRIO system, but not the FPGA. In this mode, NI provides a pre-defined personality for the FPGA that regularly scans the I/O and places it in a memory map, making it accessible to LabVIEW Real-Time. CompactRIO Scan Mode is sufficient for applications that require single-point access to I/O at rates of a few hundred hertz. To learn more about scan mode, read the “Using CompactRIO Scan Mode with NI LabVIEW” white paper and view the benchmarks.
The LabVIEW FPGA Interface option allows you to unlock the true power of CompactRIO by customising the FPGA personality in addition to programming the real-time processor, achieving performance that would typically require custom hardware. Using LabVIEW FPGA, you can implement custom triggering and timing, off-load signal analysis and processing, create custom protocols, and access I/O at its maximum rate.
After that, select the appropriate programming model for your application.
LabVIEW will then try to detect the C Series I/O modules present in the system and automatically add them to the chassis in the LabVIEW Project. Note: If your system was not discovered and you chose to add it offline, you will need to add the chassis and C Series I/O modules manually. The online LabVIEW Help discusses this operation for both scan mode and FPGA mode.

Monday, 3 July 2017

CompactRIO Scan Mode Tutorial

This section will teach you how to create a basic control application on CompactRIO using scan mode. If you chose to use the LabVIEW FPGA Interface, see the LabVIEW FPGA Tutorial instead. You should now have a new LabVIEW Project that contains your CompactRIO system, including the controller, C Series I/O modules, and chassis. An NI 9211 thermocouple input module is used in this tutorial; nonetheless, the process can be followed for any analogue input module.
1.       Save the project by selecting File»Save and entering "Basic control with scan mode". Click OK.
2.       This project will consist of only one VI: the LabVIEW Real-Time application that runs embedded on the CompactRIO controller. Create the VI by right-clicking the CompactRIO real-time controller in the project and selecting New»VI. Save the VI as RT.vi.
3.       The key operation of this application includes three routines: startup, run, and shutdown. An effortless way to accomplish this order of operation is a flat sequence structure. Place a flat sequence structure with three frames on the RT.vi block diagram.
4.       Next, insert a timed loop into the run frame of the sequence structure. Timed loops provide the capability to synchronise code to various timing sources, including the NI Scan Engine that reads and writes scan mode I/O.
5.       To configure the timed loop, double-click the clock icon on the left input node.
6.       Select Synchronize to Scan Engine as the Loop Timing Source and click OK. This causes the code in the timed loop to execute once, immediately after each I/O scan, assuring that any I/O values used in the timed loop are the most recent ones.
7.       The previous step configured the timed loop to run synchronised to the scan engine. Now configure the rate of the scan engine itself by right-clicking the CompactRIO real-time controller in the LabVIEW Project and picking Properties.
8.       Choose Scan Engine from the categories on the left and enter 100 ms as the Scan Period, causing all the I/O in the CompactRIO system to be updated every 100 ms (10 Hz). From this page, the Network Publishing Period can also be set, which regulates how often the I/O values are published to the network for remote monitoring and debugging. After that, click OK.
9.       Now that the I/O scan rate is configured, it is time to add the I/O reads to the application for control. When using CompactRIO Scan Mode, you can simply drag and drop the I/O variables from the LabVIEW Project to the RT block diagram. Expand the CompactRIO real-time controller, chassis, and the I/O module you would like to log. Select AI0, then drag and drop it into the timed loop on your RT.vi diagram.
10.   Now configure the digital module for specialty digital pulse-width-modulated output so you can use a PWM signal to control the hypothetical heater unit. To do this, right-click the digital module in the project and select Properties. In the C Series Module Properties dialogue, select Specialty Digital Configuration and a Specialty Mode of Pulse-Width Modulation. Specialty Digital mode allows the module to perform pattern-based digital I/O at rates significantly faster than is available with the scan interface. Click OK, and the module will now be in PWM mode.
11.   You are now ready to add the actual PWM output to the block diagram. To do so, expand the Mod2 object in the project and drag and drop the PWM0 item to the block diagram, as was done with the AI0 I/O node in the previous step.
12.   After that, you will want to add the PID control logic to this program. To do so, right-click the block diagram to open the functions palette and click the Search button in the top right of the palette.
13.   Search for PID, pick PID.vi in the Control Design and Simulation palette, drag it to the block diagram inside the timed loop, and wire the PID VI.
14.   The set point input is not wired yet. That is because it is best practice to keep user interface (UI) objects out of the high-priority control loop. To interact with and adjust the set point at run time, create a control that can be manipulated in a lower-priority loop and pass its value into the control loop using single-process shared variables. The two controls in our application (set point and stop) call for two new single-process shared variables.

Create the single-process shared variable by right-clicking the RT CompactRIO target in the LabVIEW Project and selecting New >> Library. Rename the library to something descriptive, such as RTComm. Then right-click the new library and select New >> Variable, which opens the Shared Variable Properties dialogue. Name the variable SetPoint (for example; the name is up to you) and select Single Process for the variable type in the Variable Type drop-down box. Finally, click the RT FIFO option in the left-hand tree and check the Enable RT FIFO check box.
15.   In the library that has just been created, make another single-process shared variable. This variable is for the Stop control that will stop the program when needed. Use all the same settings as the previous SetPoint variable, except that the type of this new variable should be Boolean.
16.   Next, create the user interface: a slide control, a waveform chart, a numeric control, and a stop (Boolean) control.
17.   Finish the program by creating a secondary (non-timed) loop for the UI objects and completing the wiring of the block diagram.
18.   Note the addition of I/O to the configuration and shutdown states to ensure that the I/O is in a known state when the program begins and ends. The basic control application should now be ready to run.

Thursday, 29 June 2017

Getting Started with CompactRIO - Performing Basic Control


The National Instruments CompactRIO

The CompactRIO programmable automation controller is an advanced embedded data acquisition and control system created for applications that require high performance and reliability. The system combines an open, embedded architecture with extreme ruggedness, small size, and flexibility that engineers and embedded designers can use with COTS hardware to quickly build custom embedded systems. NI CompactRIO is powered by National Instruments LabVIEW FPGA and LabVIEW Real-Time technologies, giving engineers the ability to design, program, and customize the CompactRIO embedded system with easy-to-use graphical programming tools.
The controller fuses a high-performance FPGA, an embedded real-time processor, and hot-swappable I/O modules. Every I/O module is directly connected to the FPGA, which grants low-level customization of timing and I/O signal processing. The embedded real-time processor and the FPGA are connected via a high-speed PCI bus, yielding a low-cost architecture with direct access to low-level hardware resources. LabVIEW contains built-in data transfer mechanisms that pass data from the I/O modules to the FPGA and from the FPGA to the embedded processor for real-time analysis, post-processing, data logging, or communication to a networked host computer.

FPGA

The embedded FPGA is a reconfigurable, high-performance chip that engineers can program with LabVIEW FPGA tools. FPGA designers were once compelled to learn and use complex design languages such as VHDL to program FPGAs; now, any scientist or engineer can use graphical LabVIEW tools to personalize and program FPGAs. Using the FPGA hardware embedded in CompactRIO, you can implement custom triggering, timing, control, synchronization, and signal processing for analog and digital I/O.

C Series I/O Modules

A diversity of I/O types is accessible, including current, voltage, thermocouple, accelerometer, RTD, and strain gauge inputs; 12, 24, and 48 V industrial digital I/O; up to ±60 V simultaneous sampling analogue I/O; 5 V/TTL digital I/O; pulse generation; counter/timers; and high-voltage/current relays. People can frequently connect wires directly from the C Series modules to their actuators and sensors, for the modules contain built-in signal conditioning for extended voltage ranges or industrial signal types.

Weight and Size

Size, weight, and I/O channel density are demanding design requirements in many embedded applications. A four-slot reconfigurable embedded system weighs just 1.58 kg (3.47 lb) and measures 179.6 by 88.1 by 88.1 mm (7.07 by 3.47 by 3.47 in.).



Tuesday, 30 May 2017

Introduction to RS232 Serial Communication - Part 1

Serial communication is basically the transmission or reception of data one bit at a time. Today’s computers generally address data in bytes or some multiple thereof. A byte contains 8 bits. A bit is basically either a logical 1 or zero. Every character on this page is actually expressed internally as one byte. The serial port is used to convert each byte to a stream of ones and zeroes as well as to convert streams of ones and zeroes to bytes. The serial port contains an electronic chip called Universal Asynchronous Receiver/Transmitter (UART) that actually does the conversion.
The serial port has many pins. We will discuss the transmit and receive pins first. Electrically speaking, whenever the serial port sends a logical one (1), a negative voltage is applied to the transmit pin; whenever it sends a logical zero (0), a positive voltage is applied. When no data is being sent, the serial port's transmit pin voltage is negative (1), and the line is said to be in the MARK state. Note that the serial port can also be forced to hold the transmit pin at a positive voltage (0), in which case the line is said to be in the SPACE or BREAK state. (The terms MARK and SPACE are also used to simply denote a negative voltage (1) or a positive voltage (0) at the transmit pin, respectively.)
When transmitting a byte, the UART (serial port) first sends a START BIT, which is a positive voltage (0), followed by the data (generally 8 bits, but it could be 5, 6, 7, or 8 bits), followed by one or two STOP BITS, which are a negative (1) voltage. The sequence is repeated for each byte sent.
At this point, you may want to know what is the duration of a bit. In other words, how long does the signal stay in a particular state to define a bit? The answer is simple. It is dependent on the baud rate. The baud rate is the number of times the signal can switch states in one second. Therefore, if the line is operating at 9600 baud, the line can switch states 9,600 times per second. This means each bit has the duration of 1/9600 of a second or about 100 µsec.
When transmitting a character there are other characteristics other than the baud rate that must be known or that must be setup. These characteristics define the entire interpretation of the data stream.
The first characteristic is the length of the byte that will be transmitted. This length, in general, can be anywhere from 5 to 8 bits.
The second characteristic is parity, which can be even, odd, mark, space, or none. With even parity, the parity bit is set to a logical 1 when the data bits contain an odd number of 1s, so that the total number of 1s transmitted (data plus parity) is even. With odd parity, the parity bit is set to a logical 1 when the data bits contain an even number of 1s, so that the total number of 1s is odd. With MARK parity, the parity bit is always a logical 1; with SPACE parity, it is always a logical 0. If no parity is selected, no parity bit is transmitted.
A third characteristic is the number of stop bits. This value, in general, is 1 or 2.
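Putting the start bit, data bits, parity, and stop bits together, the sketch below builds the logical bit sequence of one frame as described above; the helper name and defaults are ours, not part of any UART API:

    def frame_bits(byte, data_bits=8, parity="even", stop_bits=1):
        # Data is transmitted least-significant bit first.
        data = [(byte >> i) & 1 for i in range(data_bits)]
        bits = [0] + data                        # start bit is a 0 (SPACE)
        if parity in ("even", "odd"):
            ones = sum(data)
            want_even = (parity == "even")
            # Choose the parity bit so the total count of 1s is even/odd.
            bits.append(int((ones % 2 == 1) == want_even))
        bits += [1] * stop_bits                  # stop bits are 1s (MARK)
        return bits

    print(frame_bits(ord("A")))
    # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
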
Stay tuned for part two, it will be published soon.

Friday, 26 May 2017

Digital Outputs

Digital outputs require a similar investigation and many of the same considerations as digital inputs. These include careful consideration of output voltage range, maximum update rate, and maximum drive current required. However, the outputs also have various specific considerations, as described below. Relays have the benefit of high off impedance, low off leakage, low on resistance, indifference between AC and DC signals, and built-in isolation. However, they are mechanical devices and consequently provide lower reliability and typically slower response rates. Semiconductor outputs generally have an advantage in speed and reliability.
Semiconductor switches also tend to be smaller than their mechanical equivalents, so a semiconductor-based digital output device will typically provide more outputs per unit volume. When utilizing DC semiconductor devices, be careful to consider whether your system requires the output to sink or source current; many devices offer both configurations to satisfy varying requirements.

Current Limiting/Fusing

Most outputs, and especially those used to switch high currents (100 mA or so), offer some sort of output protection. There are three types most commonly available. The first is a simple fuse. Cheap and dependable, the primary issue with fuses is that they can't be reset and must be replaced when blown. The second type of current limiting is provided by a resettable fuse. Ordinarily, these devices are variable resistors: once the current reaches a certain threshold, their resistance begins to rise rapidly, ultimately limiting the current and stopping the flow.
Once the offending connection is removed, the resettable fuse returns to its original low-impedance state. The third kind of limiter is an actual current monitor that turns the output off if and when an overcurrent is detected. This "supervisor" limiter has the advantage of not requiring replacement following an overcurrent event. Many implementations of the supervisor configuration also permit the overcurrent trip to be set on a channel-by-channel basis, even within a single output board.


Monday, 10 April 2017

“Other” types of DAQ I/O Hardware - Part 1

This article describes the "other common" types of DAQ I/O devices, for example analog outputs, digital inputs, digital outputs, counter/timers, and special DAQ functions, which covers such devices as motion I/O, synchro/resolvers, LVDT/RVDTs, string pots, quadrature encoders, and ICP/IEPE piezoelectric crystal controllers. It also covers such topics as communications interfaces and timing and synchronization functions.

Analog Outputs

Analog or D/A outputs are used for a variety of purposes in data acquisition and control systems. To properly match the D/A device to your application, it is important to consider a variety of specifications, which are listed and explained below.

Number of Channels 

As it's a fairly clear requirement, we won't spend much time on it. Ensure you have enough outputs to do the job. If it's possible that your application may be expanded or adjusted later on, you may wish to specify a system with a couple of "spare" outputs. At the very least, make certain you can add outputs to the system down the road without significant trouble.
Resolution

As with A/D channels, the resolution of a D/A output is a key specification. The resolution describes the number or range of different possible output states (typically voltages or currents) the system is capable of providing. This specification is generally given in terms of "bits", where the resolution is defined as 2^(# of bits). For instance, 8-bit resolution corresponds to a resolution of one part in 2^8, or 256. Similarly, 16-bit resolution corresponds to one part in 2^16, or 65,536. When combined with the output range, the resolution determines how small a change in the output may be commanded. To determine the resolution step, simply divide the full-scale range of the output by its resolution. A 16-bit output with a 0-10 V full-scale range provides 10 V/2^16, or 152.6 microvolts of resolution. A 12-bit output with a 4-20 mA full scale provides 16 mA/2^12, or 3.906 µA of resolution.
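The same arithmetic in a couple of lines of Python, reproducing the two figures above:

    def dac_step(full_scale_range, bits):
        # Smallest commandable output change: range / 2**bits.
        return full_scale_range / 2**bits

    print(dac_step(10.0, 16))    # 0-10 V, 16-bit   -> 0.000152587... V (~152.6 uV)
    print(dac_step(16e-3, 12))   # 16 mA span, 12-bit -> 3.90625e-06 A (~3.906 uA)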

Accuracy 

Although accuracy is frequently equated with resolution, they are not the same. An analog output with one-microvolt resolution doesn't necessarily (or even usually) mean the output is accurate to one microvolt. Outside of audio outputs, D/A system accuracy is commonly on the order of a few LSBs. Be that as it may, it is critical to check this specification, as not all analog output systems are made equal. The most noteworthy and common error contributions in analog output systems are offset, gain/reference, and linearity errors.

Tuesday, 21 March 2017

Be Careful With Registrations

We found a memory growth issue in an application which used user events for interprocess communication. The issue we found was that any user events which are registered but unhandled by an event structure will increase your application's memory usage when generated.
A very similar issue was raised at the 2011 CLA summit that generated CAR 288741 (fixed for LabVIEW 2013). This CAR was filed because unhandled registered events actually reset the timeout in event structures. There was a great deal of good discussion over at LAVA with users speculating on ways to use this new behavior, but what I didn't see raised at any point was the fact that generating user events which are not handled in an event structure will cause a memory growth in your application in addition to resetting the event timeout.
From my understanding, we see this behavior because the register events node will create a mailbox for events to be placed in, but since there is no case in the event structure to handle this specific event, it is never removed from the mailbox. This leads to an increase in the application's memory each time that event is generated. I have gone back and forth between considering this expected behavior and a bug. At the time of writing, I believe it to be expected behavior, but there are certain things that are either inconsistencies in LabVIEW or demonstrate my misunderstanding of how LabVIEW events work.
One of these inconsistencies, and a reason this issue can be so hard to find, is the way unhandled events are displayed in the Event Inspector Window.
The issue I have is that although "Some Event" is not handled in the event structure, it doesn't show up in the list of Unhandled Events in Event Queue(s). Interestingly, the event appears in the event log with the event type "User Event (unhandled)", which means LabVIEW knows the event is not handled in this particular instance but still keeps it in the mailbox. What is confusing, to me at least, is that even though nothing shows up in the event inspector's list of unhandled events, flushing the event queue discards these events (also preventing the memory growth).

Monday, 20 February 2017

Common Problems with LabVIEW Real-time Module: Part 2

The second part of our series will address the difficulty with setting up a connection with a Compact Field Point and SQL Server 2000.
Let’s set up a possible scenario:
You have a SQL Server with which you would like to communicate directly (preferably with no software in between).
There are more than two ways to try to solve this problem, but we’ve narrowed them down to the two that are most likely to be a perfect solution:

1.  FTP files using cFP onto IIS FTP server (push data, then DTS).

This should be fairly easy to accomplish. As an alternative, you can write a LabVIEW app for your host computer (SQL Server Computer) that uses the Internet Toolkit to FTP the files off the cFP module, and writes the data from the file into the SQL Server using the SQL Toolkit. As another alternative, you can use DataSockets to read the file via FTP, parse it, and write the data to the SQL Server using the SQL Toolkit.
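Outside of LabVIEW, the pull-and-parse half of this approach can be sketched in a few lines of Python with the standard-library ftplib; the controller address, credentials, and file name below are invented:

    import ftplib

    ftp = ftplib.FTP("192.168.1.10")        # cFP controller's IP (example)
    ftp.login("anonymous", "")
    lines = []
    ftp.retrlines("RETR data.log", lines.append)   # pull the logged file
    ftp.quit()

    for line in lines:
        # Parse each record here, then INSERT it into SQL Server
        # (e.g. via an ODBC connection).
        print(line)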

2. Write a custom driver/protocol (which will run on the cFP)

You can accomplish this; however, it is subject to some limitations and difficulties. One approach would be a modification of the first solution, where you create a host-side LabVIEW program that communicates with the cFP controller via a custom TCP protocol that you implement, retrieves data at specified intervals, and logs the data directly to the database.
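The polling side of such a custom protocol could look like the following sketch; the address, port, and "READ" command are all invented placeholders for whatever protocol you define on the cFP:

    import socket
    import time

    def poll_once(host="192.168.1.10", port=5000):
        # One request/response exchange with the controller.
        with socket.create_connection((host, port), timeout=2.0) as s:
            s.sendall(b"READ\n")             # hypothetical data request
            return s.recv(4096).decode("ascii")

    while True:
        record = poll_once()
        # Insert `record` into the SQL Server database here.
        time.sleep(10)                       # poll every 10 seconds
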
How do you like the solutions our LabVIEW experts are providing? Are you working on a LabVIEW project at the moment? Let us know in the comments.