Showing posts with label automation. Show all posts

Monday, 18 December 2017

Engineers Turn to Automated Test Equipment to Save Time

http://www.readydaq.com/content/blog/engineers-turn-automated-test-equipment-save-time
With engineers rushing tests in order to hit tight product deadlines, the market for test equipment that automatically detects faults in semiconductors and other components is growing.
Setting aside time for testing has been a struggle for electrical engineers. The shrinking size - and increasing complexity - of semiconductor circuits is not making life any easier. Nearly 15% of wireless engineers outsource final testing, and more than 45% contract out manufacturing, which is where most semiconductor testing takes place.
Almost 65% of the survey respondents said that testing is still a challenge in terms of time consumption. New chips designed for tiny connected sensors and autonomous cars also require rigorous testing to ensure reliability.
Tight deadlines for delivering new products are forcing engineers toward automated test equipment, also known as ATE, to quickly identify defects in semiconductors, especially those used in smartphones, communication devices, and consumer electronics.
The global automated test equipment market is estimated to reach $4.36 billion in 2018, up from $3.54 billion in 2011, according to Transparency Market Research, a technology research firm.
Automated test equipment is used extensively in semiconductor manufacturing, where integrated circuits on a silicon wafer must be tested before they are prepared for packaging. It cuts down on the time it takes to test more complex chips, which are incorporating higher speeds, performance, and pin counts. Automatic testing also helps to locate flaws in system-on-chips, or SoCs, which often contain analog, mixed-signal, and wireless parts on the same silicon die.


Saturday, 16 December 2017

Semiconductor Testing


http://www.readydaq.com/content/blog/semiconductor-testing

Automated test equipment (ATE) is computer-controlled test and measurement equipment that allows for testing with minimal human interaction. The tested devices are referred to as a device under test (DUT). The advantages of this kind of testing include reducing testing time, repeatability, and cost efficiency in high volume. The chief disadvantages are the upfront costs of programming and setup.
Automated test equipment can test printed circuit boards, interconnections, and verifications. They are commonly used in wireless communication and radar. Simple ATEs include volt-ohm meters that measure resistance and voltages in PCs; complex ATE systems have several mechanisms that automatically run high-level electronic diagnostics.
ATE is used to quickly confirm whether a DUT works and to find defects. When the first out-of-tolerance value is detected, the testing stops and the device fails.
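As a rough illustration of that stop-on-first-failure flow, here is a minimal Python sketch; the step names, measurement functions, and limits are hypothetical placeholders rather than any particular ATE vendor's API.

def run_test_plan(steps):
    # Each step is (name, measure_fn, low_limit, high_limit); testing stops at
    # the first out-of-tolerance value, just as described above.
    for name, measure, low, high in steps:
        value = measure()
        if not (low <= value <= high):
            print(f"FAIL: {name} = {value} (limits {low}..{high})")
            return False
        print(f"PASS: {name} = {value}")
    return True

# Dummy measurement functions stand in for real instrument reads.
plan = [
    ("supply_voltage_V", lambda: 3.31, 3.20, 3.40),
    ("leakage_current_uA", lambda: 0.8, 0.0, 1.0),
]
print("DUT passed" if run_test_plan(plan) else "DUT failed")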

Semiconductor Testing

For ATEs that test semiconductors, the architecture consists of a master controller (a computer) that synchronizes one or more sources and capture instruments, such as an industrial PC or mass interconnect. The DUT is physically connected to the ATE by a machine called a handler, or prober, and through a customized Interface Test Adapter (ITA) that adapts the ATE's resources to the DUT.
When testing packaged parts, a handler is used to place the device on a customized interface board; silicon wafers are tested directly with high-precision probes.

Test Types

Logic Testing

Logic test systems are designed to test microprocessors, gate arrays, ASICs and other logic devices.
Linear or mixed-signal equipment tests components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), comparators, track-and-hold amplifiers, and video products. These components incorporate features such as audio interfaces, signal processing functions, and high-speed transceivers.
Passive component ATEs test passive components including capacitors, resistors, inductors, etc. Typically, testing is done by the application of a test current.
Discrete ATEs test active components including transistors, diodes, MOSFETs, regulators, TRIACS, Zeners, SCRs, and JFETs.

Printed Circuit Board Testing

Printed circuit board testers include manufacturing defect analyzers, in-circuit testers, and functional analyzers.
Manufacturing defect analyzers (MDAs) detect manufacturing defects, such as shorts and missing components, but can't test digital ICs as they test with the DUT powered down (cold). As a result, they assume the ICs are functional. MDAs are much less expensive than other test options and are also referred to as analog circuit testers.
In-circuit analyzers test components that are part of a board assembly. The components under test are "in a circuit." The DUT is powered up (hot). In-circuit testers are very powerful but are limited due to the high density of tracks and components in most current designs. The pins for contact must be placed very accurately in order to make good contact. They are also referred to as digital circuit testers or ICT.
A functional test simulates an operating environment and tests a board against its functional specification. Functional automatic test equipment (FATE) is unpopular because the equipment has not been able to keep up with the increasing speed of boards, which causes a lag between the board under test and the manufacturing process. There are several types of functional test equipment, and they may also be referred to as emulators.

Interconnection and Verification Testing

Test types for interconnection and verification include cable and harness testers and bare-board testers.
Cable and harness testers are used to detect opens (missing connections), shorts (unintended connections) and miswires (wrong pins) on cable harnesses, distribution panels, wiring looms, flexible circuits, and membrane switch panels with commonly used connector configurations. Other tests performed by automated test equipment include resistance and hipot tests.
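As a rough sketch of how a tester classifies those faults, the following Python fragment compares a measured connection map against an expected netlist; the pin names and readings are invented for illustration.

# Expected netlist versus measured continuity map; all pin names are made up.
expected = {("J1-1", "J2-1"), ("J1-2", "J2-2")}
measured = {("J1-1", "J2-2"), ("J1-2", "J2-2")}

opens = expected - measured            # intended connections that are missing
extras = measured - expected           # connections that should not be there

# An unexpected connection touching a pin that also has a missing connection is
# reported as a likely miswire; any other unexpected connection as a short.
open_pins = {pin for pair in opens for pin in pair}
miswires = {pair for pair in extras if open_pins & set(pair)}
shorts = extras - miswires

print("opens:", opens)
print("miswires:", miswires)
print("shorts:", shorts)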
Bare board automated test equipment is used to detect the completeness of a PCB circuit before assembly and wave solder.

Wednesday, 6 December 2017

LabVIEW Improvements



LabVIEW passed its 30-year anniversary in 2016, and six months ago National Instruments launched a considerably updated version of LabVIEW: its next-generation LabVIEW NXG 1.0.
LabVIEW NXG is a totally reworked version of LabVIEW, rebuilt from the ground up, and the new code base allows it to deliver a considerably improved level of performance.
LabVIEW NXG offers several definite improvements over the previous implementation of LabVIEW:
  • Plug & Play: a lot of work has gone into enabling LabVIEW NXG to provide easy set-up of hardware connections. It has true plug and play functionality.
  • IDE: The LabVIEW NXG environment has been totally overhauled to take elements of popular commercial software and replicate the attributes of the environment to make it more intuitive.
  • Tutorials: To facilitate the speedy uptake of newcomers to LabVIEW, the new LabVIEW NXG has inbuilt walk-throughs and other integrated learning facilities. This has been shown to greatly reduce the time it takes for newcomers to become proficient at programming in LabVIEW. It is even possible to undertake a number of standard tasks without “hitting the code.”
National Instruments will maintain both lines: the traditional LabVIEW 2017, launched alongside the new next-generation LabVIEW NXG, and NXG itself. Ultimately, once full compatibility has been established, the two will converge, enabling users to benefit from the new streamlined core.
Users of LabVIEW will be given access to both LabVIEW 2017 and later versions as well as LabVIEW NXG. In this way, they can make the choice of which version suits their application best.
National Instruments spokespeople stressed that the traditional development line of LabVIEW will continue to be maintained so that the large investment in software and applications that users have is not at risk. However, drivers and many other areas are already compatible with both lines.
“Thirty years ago, we released the original version of LabVIEW, designed to help engineers automate their measurement systems without having to learn the esoterica of traditional programming languages. LabVIEW was the ‘nonprogramming’ way to automate a measurement system,” said Jeff Kodosky, NI co-founder and business and technology fellow, known as the ‘Father of LabVIEW.’
“For a long time, we focused on making additional things possible with LabVIEW, rather than furthering the goal of helping engineers automate measurements quickly and easily. Now we are squarely addressing this with the introduction of LabVIEW NXG, which we designed from the ground up to embrace a streamlined workflow. Common applications can use a simple configuration-based approach, while more complex applications can use the full open-ended graphical programming capability of the LabVIEW language, G.”

Monday, 20 November 2017

9 Things to Consider When Choosing Automated Test Equipment



Automated test equipment (ATE) has the ability to reduce the cost of testing and make sure that lab teams can focus on other, more important tasks. With ATE, productivity and efficiency are boosted to the maximum by cutting out unnecessary tasks and daily busywork.
However, you should not just cash out and invest in automated test equipment; there are factors that matter in finding the system that suits you best. Our team at ReadyDAQ has prepared nine things you should consider before choosing automated test equipment.

1. Endurance and Compactness

One of the most important things is that the ATE system your company picks is designed for optimal performance over the long term. Take a careful look at the connections and components and judge whether they will survive repeated use. Many lab teams also struggle to find large areas for their testing operations, so the automated test equipment should be compact as well.

2. Customer Experience

Are other customers satisfied with the support and the rest of their experience? Does the company you bought the ATE from provide full support? You don't have to be an expert in automated test equipment, but they do, and their skills and expertise have to be available to you when you need them. Customer support and the overall customer experience are a huge factor!

3. Scalability and Compatibility

One purchase does not have to be final. It often isn't. You should check whether the equipment you ordered can be expanded or scaled over time. Your needs might change, and you want the ATE to adapt to them.
When it comes to compatibility, make sure the equipment is built to industry standards. Cross-compatibility is often important in situations where we no longer need, or have lost access to, certain products. Better safe than sorry.

4. Comprehensive

Think of all the elements needed for testing. Even better, make a list. Does the equipment you have in mind cover ALL required elements? Don't forget about power and signaling: are they included too?

5. High Test Coverage and Diagnostics 

The ATE system has to be able to provide full coverage and give insights on all components of the processed product. This can help decrease the number of possible errors and failures later on.
How about diagnostics? Does the testing system provide robust diagnostic tools to make sure the obtained results are reliable and true?

6. Cost per Test

How much does a single test cost? You have to think and plan long-term, so the cost of a single test can help you calculate whether the system provides real value for the money invested.
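As a back-of-the-envelope illustration in Python (all figures below are placeholders, not vendor quotes):

system_cost = 250_000.0        # purchase price
annual_upkeep = 20_000.0       # maintenance, calibration, licences per year
years_in_service = 5
tests_per_year = 400_000

total_cost = system_cost + annual_upkeep * years_in_service
cost_per_test = total_cost / (tests_per_year * years_in_service)
print(f"Estimated cost per test: ${cost_per_test:.3f}")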

7. Testimonials and Warranty 

Are other customers satisfied? Can the company direct you to testimonials from previous customers? What do their previous customers have to say about the systems and their performance?
Also, you don't want to be left hanging in case the system starts malfunctioning or simply stops working. Does the ATE system come with a comprehensive warranty? Make sure you're protected against damage that might happen during testing and see that the warranty covers that too.

8. Manufacturer Reputation

When did you first hear about the company? How? Did someone (besides them) say anything good about them? Is the company known for the high quality of their equipment? Discuss their past projects and learn more about the value their products provide.

9. Intuitive Performance

At first sight, is the system easy to use, or so complicated that it would require weeks of training for everyone in the lab? Does it offer intuitive performance within the testing procedure? Your team should be able to begin testing without having to go over every point in the technical manual in pinpoint detail.
Our team at ReadyDAQ is here to help you select the perfect automated test equipment for your lab.

Tuesday, 29 August 2017

IoT: Security and Privacy



Two key IoT issues, which are also intertwined, are security and privacy: the data that IoT devices store and work with needs to be safe from hackers so that sensitive data is not exposed to third parties. It is of utmost importance that IoT devices be secured against vulnerabilities and preserve privacy, so that users feel safe in their surroundings and can trust that their data will not be exposed or, worse, sold through illegal channels. Also, since devices are becoming more and more integrated into our everyday lives (many people store their credentials on their devices, for example), poorly secured devices can serve as entry points for cyber-attacks and leave data unprotected.
 
The nature of IoT devices means that every unsecured or inadequately secured device poses a potential threat. The problem runs deeper because devices can connect to each other automatically, so the user may not know at first glance whether a security issue exists. Developers and users of IoT devices therefore have an obligation to make sure that no other devices come to any harm, which is why they constantly develop and test security solutions for these challenges.
 
The second key issue, privacy, is thought to be a factor holding back the full development and implementation of IoT. Many users are concerned about their rights when it comes to their data being tracked, collected and analyzed. IoT also raises concerns regarding the potential threat of being tracked, the inability to opt out of certain data collection, surveillance, and so on. Strategies need to be implemented that bring innovation yet still respect user privacy choices and expectations. For the Internet of Things to be truly accepted, these challenges need to be examined and these problems overcome, which is a great task and a test both for developers and for users.

Saturday, 26 August 2017

IoT: Summary

The Internet of Things (or, shortened, ‘IoT’) is a hot topic in today’s world which carries extraordinary significance in socio-economic and technical aspects of everyday life. Consumer products, durable goods, automobiles and other vehicles, sensors, utilities and other everyday objects can become interconnected through the Internet and strong data-analytic capabilities, and thereby transform our surroundings. The Internet of Things is forecast to have an enormous impact on the economy: some analysts anticipate almost 100 billion interconnected IoT devices, while other analysts project that IoT will contribute more than $11 trillion to the global economy by 2025.
However, the Internet of Things comes with many important challenges which, if not overcome, could diminish or even put a stop to the progress of it thus failing to realize all its potential advantages. One of the greatest challenges is security: the newspapers are filled with headlines alerting the public to the dangers of hacking internet-connected devices, identity theft and privacy intrusion. These technical and security challenges remain and are constantly changing and developing; at the same time, new legal policies are emerging.
This document’s purpose is to help the Internet Society community find their way in the discourse about the Internet of Things regarding its pitfalls, shortcomings and promises.
Many broad ideas and complex thoughts surround the Internet of Things and in order to find one’s way, the key concepts that should be looked into as they represent the foundation of circumstances and problems of IoT are:
- Transformational Potential: If IoT takes off, a potential outcome of it would be a ‘hyperconnected world’ where limitations on applications or services that use technology cease to exist.
- IoT Definitions: although there is no single universal definition, the term Internet of Things basically refers to connected objects, sensors or items (not conventionally considered computers) which create, exchange and act on data with little or no human intervention.
- Enabling Technologies: Cloud computing, data analytics, connectivity and networking all lead to the ability to combine and interconnect computers, sensors and networks all in order to control other devices.
- Connectivity Models: There are four common communication models: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. These models show how flexible IoT devices can be when connecting and when providing value to their users.

Thursday, 24 August 2017

What is RS-232, what is RS-422, and what is RS-485?

RS-232, RS-422 and RS-485 are serial connections which can be found in various consumer electronics devices. Namely, RS-232 (ANSI/EIA-232 Standard) is the serial connection which can be historically found on IBM-compatible PCs. It is employed in many different scenarios and for many purposes, such as connecting a mouse, a printer, or a modem, as well as connecting different industrial instrumentation. Due to improvements in line drivers and cables, applications often expand the performance of RS-232 beyond the distance and speed limits which are listed in the standard. RS-232 is restricted to point-to-point connections between PC serial ports and various other devices. RS-232 hardware can be employed for serial communication up to distances of 50 feet.
On the other hand, RS-422 (EIA RS-422-A Standard) is the serial connection historically found on Apple Macintosh computers. RS-422 employs a differential electrical signal, as opposed to the unbalanced, ground-referenced signals of RS-232. Differential transmission employs two lines each for transmitting and receiving signals, which gives greater immunity to noise and allows the signal to travel longer distances than RS-232. These advantages make RS-422 a better option to consider for industrial applications.
Finally, RS-485 (EIA-485 Standard) is an improvement over RS-422, because it increases the number of devices from 10 to 32 and defines the electrical features necessary to safeguard adequate signal voltages under maximum capacity. With this enhanced multi-drop capability, one is able to create networks of devices connected to a single RS-485 serial port. The noise immunity and multi-drop capability make RS-485 the serial connection of choice in industrial applications requiring many distributed devices networked to a PC or other controller for data collection, HMI, or other operations. RS-485 is a superset of RS-422; therefore, all RS-422 devices can be controlled by RS-485. RS-485 hardware can be employed for serial communication with up to 4000 feet of cable network.
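On the PC side, all three standards are usually exposed as ordinary serial ports (RS-422/485 typically through a converter), so a quick device check can be scripted in Python. The sketch below assumes the third-party pyserial package; the port name, baud rate, and query string are placeholders for whatever your device actually expects.

import serial   # third-party pyserial package

# Port name, baud rate, and command are placeholders for your actual device.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    port.write(b"MEAS?\r\n")                    # hypothetical query command
    reply = port.readline()                     # one line, or timeout after 1 s
    print(reply.decode(errors="replace").strip())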

Tuesday, 22 August 2017

Requirements of real time control

Real-time embedded control processors are individual computing units implemented into pieces of larger and far more complicated equipment such as vehicles of all sorts (trucks, airplanes, boats, yachts, etc.), computer peripherals, audio systems, and military equipment and weapons. The control processors are said to be embedded because they are integrated into a piece of equipment which is not in itself considered a computer and whose purpose is not general-purpose computing.

Requirements of real time control

Whether they are invisible or visible to the user, real-time control processors are nowadays widespread and incorporated into people’s daily lives. For example, an invisible real-time control processor can be found in vehicles: the ABS (anti-lock braking system), which holds the vehicle steady on the road and prevents it from skidding. A real-time control processor can also be used to replace high-cost, high-maintenance, bulky components of a given system while providing better functionality at a lower expense. In other cases, the presence of a real-time control processor may be visible, for example an autopilot on an aircraft. In all the aforementioned cases, the real-time control processor is still part of a larger system. Because it is a component of a greater system, and that system has its own requirements and operating capabilities, most of these systems constrain the processor in terms of size, weight, cost, power, or reliability. At the same time, the real-time control processor is expected to deliver top performance, because the real-time events are mostly external inputs to the system that need a response within milliseconds. If the processor fails to deliver a response in such a short time span, disaster may strike: the autopilot may not change the course of the aircraft accordingly and may misinform the pilot about altitude.

Wednesday, 19 July 2017

How to keep multicloud complexity under control



Using multiple cloud providers provides needed flexibility, but it also multiplies the work and risk of getting out of sync
“Multicloud” means that you use multiple public cloud providers, such as Google and Amazon Web Services (AWS), or AWS and Microsoft, or all three—you get the idea. Although this seems to provide the best flexibility, there are trade-offs to consider.
The drawbacks I see at enterprise clients relate to added complexity. Dealing with multiple cloud providers does give you a choice of storage and compute solutions, but you must still deal with two or more clouds, two or more companies, two or more security systems … basically, two or more ways of doing anything. It quickly can get confusing.
For example, one client confused security systems and thus inadvertently left portions of its database open to attack. It’s like locking the back door of your house but leaving the front door wide open. In another case, storage was allocated on two clouds at once, when only one was needed. The client did not find out until a very large bill arrived at the end of the month.
Part of the problem is that public cloud providers are not built to work together. Although they won’t push back if you want to use public clouds other than their own, they don’t actively support this usage pattern. Therefore, you must come up with your own approaches, management technology, and cost accounting.
The good news is that there are ways to reduce the multicloud burden.
For one, managed services providers (MSPs) can manage your multicloud deployments for you. They provide gateways to public clouds and out-of-the-box solutions for management, cost accounting, governance, and security. They will also be happy to take your money to host your applications, as well as provide access to public cloud services.
If you lean more toward the DIY approach, you can use cloud management platforms (CMPs). These place a layer of abstraction between you and the complexity of managing multiple public clouds. As a result, you use a single mechanism to provision storage and compute, as well as for security and management no matter how many clouds you are using.
I remain a fan of the multicloud approach. But you’ll get its best advantage if you understand the added complexity up front and the ways to reduce it.

6 Steps on How to Learn or Teach LabVIEW OOP - Part 2


Step 4 – Practice!
This stage is harder than the last. You need to make sure:
Each child class should exactly reflect the abstract methods. If your calling code ever cares which sub-class it is calling, by using strange parameters or converting the type, then you are violating LSP, the Liskov substitution principle, the L of SOLID.
Each child class should have something relevant to do for each of the abstract methods. If it has methods that make no sense, this is a violation of the interface segregation principle.
Step 5 – Finish SOLID
Read about the open-closed principle and the dependency inversion principle and try them in a couple of sections of code.
Open-closed basically means that you design around interfaces (abstract classes in LabVIEW). You can then change the behaviour by creating a new child class (open for extension) without having to modify the original code (closed to modification). This goes well with the dependency inversion principle, which says that higher-level classes should depend only on interfaces (again, abstract classes). The lower-level code implements these classes, so the high-level code can call the lower-level code without a direct dependency. This can help in places where coupling is difficult to design out.
I leave these principles to the end because I think they are the easiest ones to follow while still writing difficult-to-read code. I’m still trying to find a balance with them: following them wholeheartedly creates lots of indirection, which can affect readability. I also think we don’t get as much benefit from them in LabVIEW, since we don’t tend to package code within projects in the same way as other languages (this may be a good topic for another post!).
Step 6 – Learn some design patterns
This was obviously part of the point of this article. When I came back to design patterns after understanding design better and the SOLID principles it allowed me to look at the patterns in a different way. I could relate them to the principles and I understood what problems they solved.
For example, the command pattern (where you effectively have a QMH that takes message classes) is a perfect example of a solution that meets the open-closed principle for an entire process. You can extend the message handler by adding support for new message types, creating new message classes instead of modifying the original code. This is how the Actor Framework works, and it has allowed the developers to create a framework with a reliable implementation of actor control while still letting you add messages to define the functionality.
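LabVIEW classes are graphical, so as a loose, text-based analogue only, here is a minimal Python sketch of the same idea: the handler loop stays closed to modification while new behaviour arrives as new message classes.

from abc import ABC, abstractmethod
from queue import Queue

class Message(ABC):
    @abstractmethod
    def execute(self, state: dict) -> None: ...

class SetPoint(Message):                 # new behaviour = a new message class
    def __init__(self, value: float):
        self.value = value
    def execute(self, state: dict) -> None:
        state["setpoint"] = self.value

class Stop(Message):
    def execute(self, state: dict) -> None:
        state["running"] = False

def message_handler(queue: Queue) -> None:
    # The handler itself never changes when new message types are added.
    state = {"running": True, "setpoint": 0.0}
    while state["running"]:
        queue.get().execute(state)
    print("stopped with state:", state)

q = Queue()
q.put(SetPoint(42.0))
q.put(Stop())
message_handler(q)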
Once you understand why these design patterns exist you can then apply some critical thinking about why and when to use them. I personally dislike the command pattern in LabVIEW because I don’t think the additional overhead of a large number of message classes is worth the benefit of being able to add messages to a QMH without changing the original code.
I think this will help you to use them more effectively and make it less likely that you end up with a spaghetti of design patterns thrown together because that is what everyone was talking about.
Urmm… so what do I do?
So I know this doesn’t have the information you need to actually do this so much as set out a program. Actually, all the steps still follow the NI course on OOP so you could simply self-pace this for general learning material.

Thursday, 13 July 2017

3 Reasons to Automate your Business

Let's accept the fact that with every technological development that takes place, its major focus is improved efficiency, cost cutting and better output. This is the major reason we are focused on implementing the latest solutions for our businesses. An increasingly popular term that we come across is automation, and rightly so: it is the thing of the future which is slowly connecting all the aspects of industrialization. The process of automation, although a long one, can be easily implemented using the perfect blend of software and hardware. But what is it that makes automation our priority? Let's have a look at the three most important reasons behind this transition.
Filling the gap between supply and demand: We have to agree that with the ever-increasing population, all industries are always under pressure to meet high demand. To tackle this problem, automation is an absolute necessity, since it has helped increase output many times over. This increase in output has also led to less wastage and optimum efficiency.
Accuracy: Okay, let's just accept the fact that machine-made material is better and more precise when compared to the human hand. As more and more industries make the shift to automation technology, it is notable that their output has increased compared with relying on human effort alone.
Cost cutting leads to increased efficiency: An automatic machine equals a hundred men. Even if this number is not exact, it is safe to say that a machine can give output which equals a lot of manpower. This not only saves money through lower salary costs but also saves production time. Testing is easier, and the production process is simplified.
So when we look at these factors, we realize how important automation actually is. But, as we mentioned before, complementary software is very important for such hardware, and that is where ReadyDAQ jumps in. High-end machinery makes use of a lot of operational devices, so ReadyDAQ offers a development solution for all its software needs without you having to start from scratch. Supporting simultaneous operation of multiple devices, it is the perfect solution for all industries trying to implement automation and its components. So, what are you waiting for? Download the 30-day trial version today and get a feel for the product before making that purchase!

Monday, 3 July 2017

CompactRIO Scan Mode Tutorial

This section will teach you how to create a basic control application on CompactRIO using scan mode. If you choose to use the LabVIEW FPGA Interface instead, see the LabVIEW FPGA Tutorial. You should start with a new LabVIEW Project that contains your existing CompactRIO system, including the controller, C Series I/O modules, and chassis. An NI 9211 thermocouple input module is used in this tutorial; nonetheless, the process can be followed for any analogue input module.
1.       Save the project by selecting File»Save and entering Basic control with scan mode. Click OK.
2.       This project will contain only one VI, which is the LabVIEW Real-Time application that runs on the CompactRIO controller. Create the VI by right-clicking the CompactRIO real-time controller in the project and selecting New»VI, then save it as RT.vi.
3.       The key operation of this application comprises three routines: start-up, run, and shutdown. An easy way to achieve this order of operation is a flat sequence structure. Place a flat sequence structure with three frames on the RT.vi block diagram.
4.       Next, insert a timed loop into the Run frame of the sequence structure. Timed loops provide the ability to synchronise code to various timing sources, including the NI Scan Engine that reads and writes scan mode I/O.
5.       To configure the timed loop, double-click the clock icon on the left input node.
6.       Select Synchronise to Scan Engine as the Loop Timing Source and click OK. This causes the code in the timed loop to execute once, immediately after each I/O scan, ensuring that any I/O values used in the timed loop are the most recent ones.
7.      The previous step configured the timed loop to run synchronised to the scan engine. Now configure the rate of the scan engine itself by right-clicking the CompactRIO real-time controller in the LabVIEW Project and selecting Properties.
8.       Choose Scan Engine from the categories on the left and enter 100ms as the Scan Period; all the I/O in the CompactRIO system will then be updated every 100ms (10Hz). From this page the Network Publishing Period can also be set, which regulates how often the I/O values are published to the network for remote monitoring and debugging. Then click OK.
9.       Now that the I/O scan rate is configured, it is time to add the I/O reads to the application. With CompactRIO Scan Mode, you can simply drag and drop the I/O variables from the LabVIEW Project to the RT block diagram. Expand the CompactRIO real-time controller, chassis, and the I/O module you would like to log. Select AI0 and drag and drop it into the timed loop on your RT.vi diagram.
10.   Next, configure the digital module for specialty digital pulse-width-modulated output so that a PWM signal can be used to control the imaginary heater unit. To do this, right-click the digital module in the project and select Properties. In the C Series Module Properties dialogue, select Specialty Digital Configuration and a Specialty Mode of Pulse-Width Modulation. Specialty Digital mode allows the module to perform pattern-based digital I/O at rates significantly faster than is available with the scan interface. Click OK and the module will now be in PWM mode.
11.   You are now ready to add the actual PWM output to the block diagram. To do so, expand the Mod2 object in the project and drag and drop the PWM0 item onto the block diagram, as was done with the AI0 I/O node in the previous step.
12.   Next, add the PID control logic to the program. Right-click the block diagram to open the functions palette and click the Search button in the top right of the palette.
13.   Search for PID, pick PID.vi from the Control Design and Simulation palette, drag it onto the block diagram inside the timed loop, and wire up the PID VI.
14.   The set point input is not wired yet. That is because it is best practice to keep user interface (UI) objects out of the high-priority control loop. If you want to interact with and adjust the set point at run time, create a control that can be manipulated in a lower-priority loop. To move values in and out of the high-priority control loop, create single-process shared variables; the two controls in this application (set point and stop) require two new single-process shared variables.

Create a single-process shared variable by right-clicking the RT CompactRIO target in the LabVIEW Project and selecting New»Library. Rename the library to something descriptive, such as RTComm. Then right-click the new library and select New»Variable; this opens the Shared Variable Properties dialogue. Name the variable SetPoint (the exact name is up to you) and select Single Process for the variable type in the Variable Type drop-down box. Finally, click the RT FIFO option in the left-hand tree and tick the Enable RT FIFO check box.
15.   Create another single-process shared variable in the library you have just made. This variable is for the Stop control that will stop the program when needed. It should have all the same settings as the previous SetPoint variable except for the type, which should be Boolean.
16.   Next, create the user interface: a slide control, a waveform chart, a numeric control, and a Stop (Boolean) control.
17.   Finish the program by creating a secondary (non-timed) loop for the UI objects and completing the wiring of the block diagram.
18.   Note the addition of I/O to the configuration and shutdown states to ensure that the I/O is in a known state when the program begins and ends. The basic control application is now ready to run.

Thursday, 29 June 2017

Getting Started with CompactRIO - Performing Basic Control


The National Instruments CompactRIO

The NI CompactRIO programmable automation controller is an advanced embedded data acquisition and control system created for applications that require high performance and reliability. The system's open, embedded architecture, extreme ruggedness, small size, and flexibility let engineers and embedded designers use COTS hardware to quickly build custom embedded systems. NI CompactRIO is powered by National Instruments LabVIEW FPGA and LabVIEW Real-Time technologies, giving engineers the ability to design, program, and customize the CompactRIO embedded system with handy graphical programming tools.
The controller combines a high-performance FPGA, an embedded real-time processor, and hot-swappable I/O modules. Every I/O module, which grants low-level customization of timing and I/O signal processing, is directly connected to the FPGA. The embedded real-time processor and the FPGA are connected via a high-speed PCI bus. The result is a low-cost architecture with direct access to low-level hardware resources. LabVIEW contains built-in data transfer mechanisms that pass data from the I/O modules to the FPGA, and from the FPGA to the embedded processor, for real-time post-processing, analysis, data logging, or communication to a networked host CPU.

FPGA

The installed FPGA is a reconfigurable, high-performance chip that engineers can program with LabVIEW FPGA tools. In the past, FPGA designers were compelled to learn and use complex design languages such as VHDL to program FPGAs; now, any scientist or engineer can use graphical LabVIEW tools to personalize and program FPGAs. One can implement custom triggering, timing, control, synchronization, and signal processing for analog and digital I/O by using the FPGA hardware installed in CompactRIO.

C Series I/O Modules

A diversity of I/O types is available, including current, voltage, thermocouple, accelerometer, RTD, and strain gauge inputs; 12, 24, and 48 V industrial digital I/O; up to ±60 V simultaneous-sampling analogue I/O; 5 V/TTL digital I/O; pulse generation; counter/timers; and high-voltage/current relays. People can frequently connect wires directly from the C Series modules to their actuators and sensors, because the modules contain built-in signal conditioning for extended voltage ranges or industrial signal types.

Weight and Size

Demanding design requirements in many embedded applications are size, weight, and I/O channel density. A four-slot reconfigurable embedded system weighs just 1.58 kg (3.47 lb) and measures 179.6 by 88.1 by 88.1 mm (7.07 by 3.47 by 3.47 in.).



Friday, 12 May 2017

Do you know about ARINC-429?

ARINC-429 is the avionics interface used by virtually all commercial aircraft (although 429 is not the primary interface on the Boeing 777 and 787 and the Airbus A-380). It is used for everything from communication between complex systems, such as flight management computers and autopilots, to monitoring simpler devices such as airspeed sensors or flap position indicators.
In test systems, it is often essential to coordinate data from ARINC-429 devices with more conventional DAQ devices, such as pressure sensors and strain gages. When analysing the stress placed on a wing spar, you would certainly like to be able to correlate the stress results with parameters such as airspeed, altitude, and any turn- or climb/descent-induced g-forces.
While the ARINC-429 bus is well defined, PC-based interfaces for the 429 bus differ considerably. The 429 bus defines functionality in terms of labels, with each label representing a different parameter. It is essential for the data acquisition system to be able to distinguish between the labels. If your system is only interested in airspeed, you want to ignore the other parameters. Note that some ARINC-429 interfaces allow you to make these selections in interface hardware, while others put the burden of effort on the software.
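If your interface hardware does not filter labels for you, the host software has to. A minimal Python sketch of that filtering follows; the label of interest and the raw words are invented, and the exact bit ordering of the label field varies between interface products, so check your hardware's word format.

# The label of interest, the raw words, and the bit layout assumed here are all
# illustrative; real interfaces differ in how they present the 32-bit word.
AIRSPEED_LABEL = 0o206                  # ARINC-429 labels are quoted in octal

def label_of(word: int) -> int:
    return word & 0xFF                  # assume the label sits in the low 8 bits

raw_words = [0x60010086, 0x400200C5, 0x300FF086]
airspeed_words = [w for w in raw_words if label_of(w) == AIRSPEED_LABEL]
print([hex(w) for w in airspeed_words])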
Many ARINC-429 devices run on a fixed schedule. For example, the magnetic heading might be transmitted every 200 ms. Some ARINC interfaces depend on software-based scheduling, while others incorporate the scheduling into an FPGA in the hardware. The more features and parameters a given ARINC interface implements in hardware the better, as you may want those precious host CPU cycles for other things.

Sunday, 30 April 2017

Data Acquisition Modules


GSM/GPRS Module 

For long-range wireless duplex data communication, General Packet Radio Service (GPRS) is a reasonable candidate for the task. The data packets are sent over GPRS and uploaded to data storage in the cloud. A quad-band 850/900/1800/1900 MHz GSM module performs the data transmission. The GSM module has an embedded TCP stack which permits data upload to a web server. With existing telco network coverage spreading across wide areas, plus satellite communication, the data can be sent and received in most locations on the globe.

Temperature and humidity sensor 

The DHT11 is a low-cost temperature and humidity sensor that provides a calibrated digital signal output for temperature and humidity. The sensor has ±5% accuracy over the 20-80% humidity range and ±2ºC accuracy for temperatures from 0-50ºC. It requires 5 VDC to operate.

Analog Voltage Divider 

The Analog Voltage Divider V2 can measure voltages from 0.0245V to 25V. The module communicates with the microcontroller through an ADC channel, which provides a 10-bit ADC. The sensor is based on the principle of a voltage divider: the input voltage is scaled down by a factor of five, converted into a digital reading over a range of 1024 counts (10 bits), and scaled back up with a maximum of 25V, as performed in the conversion equation.
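That conversion can be reconstructed roughly as follows, assuming a 5 V, 10-bit ADC behind a 1:5 divider; treat this Python fragment as a sketch of the idea rather than the board's documented formula.

ADC_COUNTS = 1023          # full-scale count of the 10-bit converter
ADC_REF_V = 5.0            # assumed ADC reference voltage
DIVIDER_RATIO = 5.0        # the input is scaled down by a factor of five

def divider_input_voltage(reading: int) -> float:
    # Scale the raw count back up to the voltage at the divider input.
    return (reading / ADC_COUNTS) * ADC_REF_V * DIVIDER_RATIO

print(divider_input_voltage(512))      # roughly 12.5 V at the input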
To begin, the GSM/GPRS module will try to establish a connection to the GPRS network. Once the GPRS connection is set up, the GSM/GPRS module will confirm the established connection by reporting the network information, signal strength, and network registration through the UI window. Then, the real-time clock (RTC) module is accessed through the I2C interface to get the current time. The SD card is accessed via the SPI interface and a new CSV spreadsheet file is created. If the file already exists, the data in the file will be overwritten.
Each of the five sensors then takes measurements of its parameter: temperature and humidity, plus the monitored hardware's voltage and current and the device's voltage and current. The recorded measurements are saved in the CSV file along with a date and time stamp, and are then sent over the GPRS connection to the remote monitoring database. If the data transmission fails, the microcontroller retransmits the data to the remote monitoring server. The remote monitoring database server receives the acquired data and displays them according to the assigned parameter. The website is publicly accessible; however, the data in the remote monitoring database can only be accessed by authorised users.
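A hedged Python sketch of that log-then-upload loop is given below; the CSV path, upload URL, and field names are placeholders, and the real system runs on a microcontroller rather than a PC.

import csv, time, urllib.request, urllib.error

def log_and_send(reading, csv_path="log.csv",
                 url="http://example.com/upload", retries=3):
    row = {"timestamp": time.strftime("%Y-%m-%d %H:%M:%S"), **reading}
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:              # brand-new file: write the header first
            writer.writeheader()
        writer.writerow(row)
    payload = ",".join(f"{k}={v}" for k, v in row.items()).encode()
    for _ in range(retries):           # retransmit on failure, as described above
        try:
            urllib.request.urlopen(url, data=payload, timeout=10)
            return
        except urllib.error.URLError:
            time.sleep(5)

# Example call with made-up readings:
# log_and_send({"temperature_C": 27.4, "humidity_pct": 61.0})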

Monday, 17 April 2017

Communication Interfaces


When considering piezoelectric crystal devices for use in a DAQ system, most people think of vibration and accelerometer sensors, since these crystals are the basis of the ubiquitous ICP/IEPE sensors. It is generally understood that when you apply a force to a piezoelectric crystal it deforms slightly, and that this deformation produces a measurable voltage across the crystal.
Another feature of these crystals is that a voltage placed across an unstressed piezoelectric crystal makes the crystal flex. This deflection is in fact tiny, but also very well behaved and predictable. Piezoelectric crystals have become a very common motion-control device in systems that require small deflections. In particular, they are used in a wide variety of laser control systems as well as a host of other optical control applications. In such applications, a mirror is attached to the crystal, and as the voltage applied to the crystal is changed, the mirror moves. Although the movement is usually not visible to the human eye, at the wavelength of light the movement is considerable. Driving these piezoelectric devices presents two interesting challenges.
First, achieving the desired movement from a piezoelectric crystal typically requires large voltages, though mercifully at low DC currents. Second, although the crystals have high DC impedance, they also have high capacitance, and driving them at high rates is not a trivial undertaking.
Communications is an oft-overlooked part of many data acquisition and control systems. Note that we are not discussing the communications interface between the I/O device and the host PC; we are referring to other devices to and from which we either need to acquire data or issue control commands. Examples of this type of device might be the CAN bus in a car or the ARINC-429 interface in a commercial aircraft or ship.

Wednesday, 12 April 2017

Other types of DAQ Hardware - Part 3


Output Drive

Be certain to research how much current is required by whatever device you are attempting to drive with the analog output channel. Most D/A channels are limited to under ±5 mA or ±10 mA max. A few vendors offer higher output currents in standard output modules (e.g., UEI's DNA-AO-308-350, which will drive ±50 mA). For higher output still, it is often possible to add an external buffer amplifier. Note that if you are driving more than 10 mA, you will probably want to specify a system with sense leads if you need to maintain high system accuracy.

Output Range 

Another fairly obvious consideration: the output range must be matched to your application requirements. Like their analog input siblings, a D/A channel can drive a smaller range than its maximum, but with a reduction in effective resolution. Most analog output modules are designed to drive ±10 V, though a few, such as UEI's DNA-AO-308-350, will directly drive outputs up to ±40 V. Higher voltages may be accommodated with external buffer devices. Obviously, at voltages greater than ±40 V, safety becomes a critical factor. Be careful, and if in doubt, contact a specialist who will help ensure your system is safe. A final note regarding extending the output range of a D/A channel: if the device being driven is either isolated from the analog output system, or if it uses differential inputs, it may be possible to double the effective output range by using two channels that drive their outputs in opposite directions.

Output Update Rate 

Although many DAQ systems "set and forget" the analog outputs, many more require that they respond to periodic updates. In control systems, loop stability or a requirement for control "smoothness" will often dictate that outputs be updated a certain number of times per second. Likewise, in applications where the D/As provide a system excitation, a certain number of updates per second may be required. Check that the system you are considering is capable of providing the update rate required by your application. It is also a smart idea to build a little headroom into this spec in case you find down the road that you need to update the outputs a bit faster.

Output Slew Rate

The second part of the output "speed" specification, the slew rate, determines how rapidly the output voltage changes once the D/A converter has been commanded to a new value. Typically specified in volts per microsecond, if your system requires the outputs to change and settle quickly, you will need to check your D/A output slew rate.

Output Glitch Energy

As the output changes from one level to the next, a "glitch" is created. Essentially, the glitch is an overshoot that subsequently dies away via a damped oscillation. In DC applications, the glitch is seldom troublesome, but if you are hoping to create a waveform with the analog output, the glitch can be a significant issue as it may introduce noticeable noise onto any excitation derived from it. Most D/A devices are designed to limit glitch, and it is possible to essentially eliminate it in the D/A system, but doing so all but guarantees that the output slew rate will be reduced.

Tuesday, 11 April 2017

Other types of DAQ Hardware - Part 2

Monotonicity 

It is common sense to assume that if you command your output to go to a higher voltage, it will, regardless of the overall accuracy. However, this is not necessarily the case. D/A converters exhibit an error called differential non-linearity (DNL). Generally, DNL error represents the variation in output "step size" between adjacent codes. Ideally, commanding the output to increase by 1 LSB would make the output change by an amount equal to the overall output resolution. However, D/A converters are not perfect, and increasing the digital code written to a D/A by one may make the output change by 0.5 LSB, 1.3 LSB, or some other arbitrary amount. A D/A channel is said to be monotonic if each time you increase the digital code written to the D/A converter, the output voltage does indeed increase. If the D/A converter DNL is under ±1 bit, the converter will be monotonic. If not, commanding a higher output voltage could in fact make the output drop. In control applications this can be very risky, as it becomes theoretically possible for the system to "lock" onto a false set point, away from the one desired (a simple bench check for this is sketched at the end of this post).

Output Type

Unlike analog inputs, which arrive in a host of sensor-specific input formats, analog outputs ordinarily come in two flavors: voltage output and current output. Be certain to specify the correct type for your system. A few devices offer a blend of voltage and current outputs, but most offer just a single type. If your system requires both, you may want to consider a current output module, as current outputs can often be converted to a reasonable voltage output with the simple installation of a shunt resistor. Note that the accuracy of the shunt-resistor-derived voltage output is highly dependent on the precision of the resistor used. Also note that the shunt resistor will be in parallel with any load or device connected to it. Make sure the input impedance of the driven device is sufficiently high not to affect the shunt.
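Returning to the monotonicity point above, a bench-style check can be scripted if you can command the D/A and read back its output with a meter. This Python sketch uses dummy stand-in functions; swap in real instrument calls for an actual test.

def check_monotonic(write_code, read_output, n_codes=4096):
    # Step the D/A one code at a time and flag any code where the measured
    # output falls even though the commanded code went up.
    bad_codes = []
    previous = None
    for code in range(n_codes):
        write_code(code)
        value = read_output()
        if previous is not None and value < previous:
            bad_codes.append(code)
        previous = value
    return bad_codes

# Dummy stand-ins: an ideal 12-bit, 0-10 V output, which is perfectly monotonic.
_state = {"code": 0}
write_code = lambda c: _state.update(code=c)
read_output = lambda: _state["code"] * 10.0 / 4095
print("non-monotonic codes:", check_monotonic(write_code, read_output))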

Monday, 10 April 2017

“Other” types of DAQ I/O Hardware - Part 1

This article describes the "other" common types of DAQ I/O: devices such as analog outputs, digital inputs, digital outputs, counter/timers, and special DAQ functions, a category that covers motion I/O, synchro/resolvers, LVDT/RVDTs, string pots, quadrature encoders, and ICP/IEPE piezoelectric crystal controllers. It also covers such topics as communications interfaces, timing, and synchronization capabilities.

Analog Outputs

Analog or D/A outputs are used for a variety of purposes in data acquisition and control systems. To properly match the D/A device to your application, it is important to consider a variety of specifications, which are listed and explained below.

Number of Channels 

As it is a fairly obvious requirement, we won't spend much time on it. Ensure you have enough outputs to do the job. If it is possible that your application may be expanded or modified later on, you may wish to specify a system with a couple of spare outputs. At a minimum, make certain you can add outputs to the system down the road without significant difficulty.

Resolution

As with A/D channels, the resolution of a D/A output is a key specification. The resolution describes the number or range of different possible output states (typically voltages or currents) the system is capable of providing. This spec is usually given in terms of "bits", where the resolution is defined as 2^(# of bits). For instance, 8-bit resolution corresponds to a resolution of one part in 2^8, or 256. Similarly, 16-bit resolution corresponds to one part in 2^16, or 65,536. When combined with the output range, the resolution determines how small a change in the output can be commanded. To determine the step size, simply divide the full-scale range of the output by its resolution. A 16-bit output with a 0-10 V full-scale range gives 10 V / 2^16, or 152.6 microvolts, of resolution. A 12-bit output with a 4-20 mA full scale gives 16 mA / 2^12, or 3.906 µA, of resolution.
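The same step-size arithmetic, written out as a small Python helper:

def output_step(full_scale, bits):
    # Smallest commandable change = full-scale range divided by 2^bits.
    return full_scale / 2 ** bits

print(output_step(10.0, 16))     # ~1.526e-04 V, i.e. about 152.6 microvolts
print(output_step(16e-3, 12))    # ~3.906e-06 A, i.e. about 3.906 microamps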

Accuracy 

Although accuracy is often equated with resolution, they are not the same. An analog output with one microvolt resolution does not necessarily (or even usually) mean the output is accurate to one microvolt. Outside of audio outputs, D/A system accuracy is typically on the order of a couple of LSBs. Even so, it is important to check this spec, as not all analog output systems are created equal. The most significant and common error contributions in analog output systems are offset, gain/reference, and linearity errors.

Monday, 27 March 2017

Strain (& Stress) Measurements

The strain gage (a.k.a. strain gauge) is one of the most commonly measured devices in data acquisition and DAQ systems. Strain is often measured as the actual parameter of interest: if the application really cares about how much an object expands, contracts, or twists, the desired measurement is strain. Strain is also frequently measured as an intermediate means to gauge stress, where stress is the force required to induce a strain. Perhaps the most common examples of this interpreted measurement are load cells, where the strain of a known, well-characterized metallic bar is measured, but the actual output scale factor of the cell is in units of force (e.g. pounds or newtons). The stress/strain relationship is well characterized in many materials in particular configurations, making the conversion from strain into stress a straightforward numerical calculation. Making matters easier still, for many materials, including virtually all metals, the relationship between stress and strain is linear when the stress is applied in pure tension or compression. The linearity of this relationship is referred to as Hooke's law, while the actual coefficient that describes the relationship is commonly referred to as either the modulus of elasticity or Young's modulus.

Whether stress or strain is the actual measurement of interest, the mechanics of the strain gage and the hardware required to make the measurement are practically identical. To make a basic strain gage, you need only firmly attach a length of wire to the object being strained. If attached in line with the strain, the wire is stretched as the object lengthens under tension. As the wire's length increases, so does its resistance. Conversely, if the strained object is compressed, the length of the wire decreases, and there is a corresponding change in the wire's resistance. Measure the resistance change and you have an indication of the strain changes of your object. Of course, the scale factor needed to convert the resistance change into strain would have to be determined somehow, and that would not be a trivial process. Also, the resistance change for a small strain change would be minuscule, making the measurement a difficult one.
Today's strain gage manufacturers have solved both the scale factor problem and, to a certain degree, the size-of-resistance-change problem. To increase the output (resistance change) per unit of strain, today's strain gages are typically made by laying out numerous "wires" in a zigzag arrangement.
A strain gage with 10 zigs and 10 zags effectively increases the output scale factor by a factor of 20 over the single-wire case. For a straightforward application, you simply align the strain gage so the "long" elements are parallel to the direction of strain to be measured, and attach the gage with a suitable adhesive.
Strain gage makers also furnish gages with very precise scale factors. This allows users to convert the resistance measurement into strain with a simple, linear equation (excluding temperature effects… more on this later). The scale factor of a strain gage is referred to as its gage factor, which depending on the source is commonly abbreviated as GF, Fg, or even K.
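Put as a formula, the gage factor relates the fractional resistance change to strain, so strain is (ΔR/R) divided by GF. A small Python helper, with illustrative values only:

def strain_from_resistance(delta_r, nominal_r, gage_factor):
    # strain = (delta R / R) / GF, ignoring temperature effects
    return (delta_r / nominal_r) / gage_factor

# A 350-ohm gage with GF = 2.0 that changes by 0.35 ohm reads 500 microstrain.
print(strain_from_resistance(0.35, 350.0, 2.0) * 1e6, "microstrain")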