
Saturday, 16 December 2017

Semiconductor Testing


http://www.readydaq.com/content/blog/semiconductor-testing

Automated test equipment (ATE) is computer-controlled test and measurement equipment that allows testing with minimal human interaction. Each tested device is referred to as a device under test (DUT). The advantages of this kind of testing include reduced testing time, repeatability, and cost efficiency in high volume; the chief disadvantages are the upfront costs of programming and setup.
Automated test equipment can test printed circuit boards and perform interconnection and verification testing, and it is commonly used in wireless communication and radar. Simple ATEs include volt-ohm meters that measure resistance and voltages in PCs; complex ATE systems have several mechanisms that automatically run high-level electronic diagnostics.
ATE is used to quickly confirm whether a DUT works and to find defects. When the first out-of-tolerance value is detected, the testing stops and the device fails.
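This stop-on-first-failure flow is easy to sketch in software. The following Python fragment is purely illustrative (the test names and limits are invented, not taken from any particular ATE):

    # Hypothetical limit table: test name -> (lower limit, upper limit)
    LIMITS = {
        "supply_current_mA": (10.0, 50.0),
        "output_voltage_V": (4.75, 5.25),
    }

    def run_test_plan(measure):
        """measure(name) returns the measured value for one test step."""
        for name, (lo, hi) in LIMITS.items():
            value = measure(name)
            if not (lo <= value <= hi):
                # First out-of-tolerance value: stop immediately, DUT fails.
                return False, name, value
        return True, None, None

The plan returns as soon as any measurement falls outside its limits, mirroring the behavior described above.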

Semiconductor Testing

For ATEs that test semiconductors, the architecture consists of a master controller (typically an industrial PC) that synchronizes one or more source and capture instruments, connected to the DUT through a mass interconnect. The DUT is physically connected to the ATE by a machine called a handler or prober, and through a customized Interface Test Adapter (ITA) that adapts the ATE's resources to the DUT.
When testing packaged parts, a handler places the device on a customized interface board; silicon wafers are tested directly with high-precision probes.

Test Types

Logic Testing

Logic test systems are designed to test microprocessors, gate arrays, ASICs and other logic devices.
Linear or mixed-signal equipment tests components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), comparators, track-and-hold amplifiers, and video products. These components incorporate features such as audio interfaces, signal-processing functions, and high-speed transceivers.
Passive component ATEs test passive components including capacitors, resistors, inductors, etc. Typically, testing is done by the application of a test current.
Discrete ATEs test active components including transistors, diodes, MOSFETs, regulators, TRIACS, Zeners, SCRs, and JFETs.

Printed Circuit Board Testing

Printed circuit board testers include manufacturing defect analyzers, in-circuit testers, and functional analyzers.
Manufacturing defect analyzers (MDAs) detect manufacturing defects, such as shorts and missing components, but can't test digital ICs, since they test with the DUT powered down (cold) and therefore assume the ICs are functional. MDAs are much less expensive than other test options and are also referred to as analog circuit testers.
In-circuit analyzers test components that are part of a board assembly. The components under test are "in a circuit." The DUT is powered up (hot). In-circuit testers are very powerful but are limited due to the high density of tracks and components in most current designs. The pins for contact must be placed very accurately in order to make good contact. They are also referred to as digital circuit testers or ICT.
A functional test simulates an operating environment and tests a board against its functional specification. Functional automatic test equipment (FATE) is unpopular because the equipment has not kept up with the increasing speed of boards, which causes a lag between the board under test and the manufacturing process. There are several types of functional test equipment, and they may also be referred to as emulators.

Interconnection and Verification Testing

Test types for interconnection and verification include cable and harness testers and bare-board testers.
Cable and harness testers are used to detect opens (missing connections), shorts (unwanted connections), and miswires (wrong pins) on cable harnesses, distribution panels, wiring looms, flexible circuits, and membrane switch panels with commonly used connector configurations. Other tests performed by automated test equipment include resistance and hipot tests.
Bare-board automated test equipment is used to verify the completeness of a PCB circuit before assembly and wave solder.

Monday, 20 November 2017

9 Things to Consider When Choosing Automated Test Equipment



Automated test equipment (ATE) can reduce the cost of testing and ensure that lab teams are free to focus on other, more important tasks. With ATE, productivity and efficiency are boosted by cutting out unnecessary tasks and daily activities.
However, you should not just cash out and invest in any automated test equipment; there are factors that matter when finding the system that suits you best. Our team at ReadyDAQ has prepared nine things you should consider before choosing automated test equipment.

1. Endurance and Compactness

One of the most important things is that the ATE system your company picks is designed for optimal performance over the long term. Take a careful look at connections and components and judge whether they will survive repeated use. Many lab teams also struggle to find large areas for their testing operations, so the automated test equipment should be compact as well.

2. Customer Experience

Are other customers satisfied with the support and the overall experience they had? Does the company you bought the ATE from provide full support? You don't have to be the expert in automated test equipment, but they do, and their skills and expertise have to be available to you when you need them. Customer support and the overall customer experience are a huge factor!

3. Scalability and Compatibility

One purchase does not have to be final, and it often isn't. You should check whether the equipment you ordered can be expanded or scaled over time. Your needs might change, and you want the ATE to adapt to them.
As for compatibility, make sure that the equipment is built following all industry standards. Cross-compatibility is often important in situations where we no longer need, or have lost access to, certain products. Better safe than sorry.

4. Comprehensiveness

Think of all the elements needed for testing. Even better, make a list. Does the equipment you have in mind cover ALL required elements? Don't forget about power and signaling: are they included too?

5. High Test Coverage and Diagnostics 

The ATE system has to be able to provide full coverage and give insights on all components of the processed product. This can help decrease the number of possible errors and failures later on.
How about diagnostics? Does the testing system provide robust diagnostic tools to make sure the obtained results are reliable and true?

6. Cost per Test

How much does a single test cost? You have to think and plan long-term, so the cost of a single test can help you calculate whether the system provides real value for the money invested.
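As a purely illustrative amortization model (all figures below are invented):

    # Hypothetical figures for illustration only.
    capital_cost = 250_000.0      # ATE purchase and setup
    annual_operating = 30_000.0   # maintenance, power, floor space per year
    lifetime_years = 5
    tests_per_year = 500_000

    total_cost = capital_cost + annual_operating * lifetime_years
    total_tests = tests_per_year * lifetime_years
    print(f"Cost per test: ${total_cost / total_tests:.3f}")  # ~ $0.160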

7. Testimonials and Warranty 

Are other customers satisfied? Can the company direct you to testimonials from previous customers? What do their previous customers have to say about the systems and their performance?
Also, you don't want to be left hanging in case the system starts malfunctioning or simply stops working. Does the ATE system come with a comprehensive warranty? Make sure you're protected against damage that might happen in testing, and check that the warranty covers that too.

8. Manufacturer Reputation

When did you first hear about the company? How? Did someone (besides them) say anything good about them? Is the company known for the high quality of their equipment? Discuss their past projects and learn more about the value their products provide.

9. Intuitive Performance

At first sight, is the system easy to use, or so complicated that it would require weeks of training for everyone in the lab? Does it offer intuitive performance within the testing procedure? Your team should be able to begin testing without having to go over every point in the technical manual in pinpoint detail.
Our team at ReadyDAQ is here to help you select the perfect automated test equipment for your lab.

Tuesday, 29 August 2017

IoT: Security and Privacy



Two key, intertwined IoT issues are security and privacy: the data IoT devices store and work with needs to be safe from hackers so that sensitive data is not exposed to third parties. It is of utmost importance that IoT devices be secure from vulnerabilities and private, so that users feel safe in their surroundings and trust that their data will not be exposed or, worse, sold through illegal channels. Also, since devices are becoming more and more integrated into our everyday lives (many people store their credentials on their devices, for example), poorly secured devices can serve as entry points for cyber-attacks and leave data unprotected.
 
The nature of IoT devices means that every unsecured or inadequately secured device poses a potential threat. The problem runs even deeper because various devices can connect to each other automatically, preventing the user from knowing at first glance whether a security issue exists. Therefore, developers and users of IoT devices have an obligation to make sure that no other devices come to any potential harm, so they must constantly develop and test security solutions for these challenges.
 
The second key issue, privacy, is thought to be a factor holding back the full development and implementation of IoT. Many users are concerned about their rights when their data is tracked, collected, and analyzed. IoT also raises concerns about the potential for surveillance, the threat of being tracked, and the inability to opt out of certain data collection. Strategies need to be implemented that still deliver innovation while respecting user privacy choices and expectations. For the Internet of Things to be truly accepted, these challenges need to be examined and overcome, which is a great task and a test both for developers and for users.

Saturday, 26 August 2017

IoT: Summary

The Internet of Things (IoT) is a hot topic in today's world that carries extraordinary significance for the socio-economic and technical aspects of everyday life. Consumer products, durable goods, automobiles and other vehicles, sensors, utilities, and other everyday objects can become connected to one another through the Internet and strong data-analytic capabilities, and thereby transform our surroundings. The Internet of Things is forecast to have an enormous impact on the economy: some analysts anticipate almost 100 billion interconnected IoT devices, while others project that IoT will contribute more than $11 trillion to the global economy by 2025.
However, the Internet of Things comes with many important challenges which, if not overcome, could diminish or even halt its progress and prevent it from realizing its potential advantages. One of the greatest challenges is security: the newspapers are filled with headlines alerting the public to the dangers of hacking internet-connected devices, identity theft, and privacy intrusion. These technical and security challenges remain and are constantly evolving; at the same time, new legal policies are emerging.
This document’s purpose is to help the Internet Society community find their way in the discourse about the Internet of Things regarding its pitfalls, shortcomings and promises.
Many broad ideas and complex thoughts surround the Internet of Things. To find one's way, the key concepts to look into, as they represent the foundation of the circumstances and problems of IoT, are:
- Transformational Potential: If IoT takes off, a potential outcome would be a 'hyperconnected world' in which limitations on applications or services that use technology cease to exist.
- IoT Definitions: Although there is no single universal definition, the term Internet of Things generally refers to networks of connected objects, sensors, or items (other than traditional computers) that create, exchange, and control data with little or no human intervention.
- Enabling Technologies: Cloud computing, data analytics, connectivity, and networking together make it possible to combine and interconnect computers, sensors, and networks in order to control other devices.
- Connectivity Models: There are four common communication models: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. These models show how flexible IoT devices can be when connecting and when providing value to their users.

Sunday, 20 August 2017

LabVIEW Projects You Should Know


STÄUBLI LABVIEW INTEGRATION LIBRARY
The DSM LabVIEW-Stäubli Control Library was created to simplify communication between a host PC running LabVIEW and a Stäubli robotic motion controller, so that the robot can be controlled from the LabVIEW environment.
Stäubli robots are usually found in the automation industry. The standard Stäubli programming language, VAL3, is a flexible language that allows a wide variety of tasking. Although the VAL3 language works well in its own environment, there are limited options for connecting the robot to an existing PC-based test and measurement system. The LabVIEW language, on the other hand, was designed from the start to run systems found in a research environment. The DSM LabVIEW-Stäubli Integration Library lets the user quickly create applications for a Stäubli robot using the familiar LabVIEW programming language.
 
 
AUTOMATED CRYOGENIC TEST STATION
A test station was built to automate cyclic cryogenic exposure. A LabVIEW program was written to automate the process and collect data. The software:
Monitored the temperature of up to 8 thermocouples
Checked the life status of test specimens twice per cycle
Performed automated backups to allow for data recovery
Integrated with a pneumatic control board and safety features
 
 
TENSILE TESTER CONTROL PROGRAM
This system is able to record high-resolution x-ray imagery of test subjects of aerospace alloys while they are under tensile and cyclic fatigue tests.  This capability can improve understanding of how grain refinement is used to enhance material properties.  The tensile tester can function in multiple modes of operation. The sample can be fully rotated within the tester, permitting three-dimensional imagery of samples.
 
DYNAMOMETER TEST STATION
A test station designed to characterize piezoelectric motors was built, with programmable current source and a DC motor integrated into the system to apply a range of resistive torque loads to the tested motor.  A torque load cell and a high-resolution encoder were used to measure torque and speed, which is collected at each resistive torque level, forming a torque curve. A LabVIEW project was programmed to run the test. Test settings were configured in the program and data was collected by an NI DAQ card. The program also included data manipulation and analysis.
 
OPEN-LOOP ACTUATOR CONTROLLER
The goal is to characterize the actuator's performance in open loop so that a closed-loop control scheme can be developed. The program can output voltage waveforms as well as voltage steps up to 40 V. Voltage duration is programmable down to the millisecond, and an encoder integrated into the system provides real-time readings. The encoder offers micron-level resolution but picks up significant noise from the vibrations present in the system, so the data is filtered after the test to report accurate, low-noise results.

Thursday, 10 August 2017

About I²C

I²C is a multi-master protocol that uses two signal lines, named 'serial data' (SDA) and 'serial clock' (SCL). There is no need for chip-select (slave-select) lines or arbitration logic. Virtually any number of slaves and any number of masters can be connected to these two signal lines and communicate with each other using a protocol that specifies:
 
7-bit slave addresses: every device connected to the bus has a unique address;
control bits for governing the start and end of communication, its direction, and the acknowledgement mechanism;
data divided into 8-bit bytes.
 
The data rate must be chosen from 100 kbit/s, 400 kbit/s, and 3.4 Mbit/s, respectively called standard mode, fast mode, and high-speed mode. Some I²C variants also include 10 kbit/s and 1 Mbit/s as valid speeds.
Physically, the I²C bus comprises the two active wires, SDA and SCL, and a ground connection. The active wires are both bidirectional. The I²C protocol specification states that the IC that initiates a data transfer on the bus is considered the Bus Master; at that time, all the other ICs are considered Bus Slaves.
 
At the electrical level, there is no conflict at all if multiple devices try to put a logic level on the I²C bus lines simultaneously. If one of the drivers attempts to write a logical zero and another a logical one, the open-drain and pull-up arrangement ensures there is no short circuit, and the bus will see a logical zero. In other words, in any conflict, a logical zero always 'wins'.
Furthermore, the I²C protocol helps deal with communication problems. Any device present on the I²C bus listens to it permanently. Prospective masters encountering a START condition will wait until a STOP is encountered before attempting a new bus access. Slaves on the I²C bus decode the device address that follows the START condition and check whether it matches their own. All slaves that are not addressed wait until a STOP condition is issued before listening to the bus again. Likewise, since the I²C protocol provides an active-low acknowledge bit after each byte, the master/slave pair can detect their counterpart's presence. Finally, if anything else goes wrong, the device 'talking on the bus' will know it by simply comparing what it sends with what it sees on the bus. If a difference is detected, a STOP condition must be issued, which releases the bus.
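On a Linux single-board computer, a master-side transaction like the ones described above can be exercised from Python with the smbus2 package. The bus number, slave address, and register below are placeholders, not values from any specific device:

    from smbus2 import SMBus

    DEVICE_ADDR = 0x48   # placeholder 7-bit slave address
    DATA_REG = 0x00      # placeholder register within the device

    with SMBus(1) as bus:                      # /dev/i2c-1
        # The master issues START, the address byte, and the register;
        # the slave ACKs each byte it accepts, as described above.
        raw = bus.read_word_data(DEVICE_ADDR, DATA_REG)
        print(f"Raw register value: 0x{raw:04X}")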

Tuesday, 8 August 2017

Basics and Applications of Optical Sensor

An optical sensor converts light rays into an electronic signal. Its purpose is to measure a physical quantity of light and, depending on the type of sensor, translate it into a form readable by an integrated measuring device. Optical sensors can be either external or internal. External sensors collect and transmit a required quantity of light, while internal sensors measure bends and other small changes in direction.

Types of Optical Sensors

There are various kinds of optical sensors, and here are the most common types.

Through-Beam Sensors

The usual system consists of two independent components: the receiver and the transmitter, placed opposite each other. The transmitter projects a light beam onto the receiver. An interruption of the light beam is interpreted as a switching signal by the receiver. It does not matter where the interruption occurs.
The advantage is that large operating distances can be attained, and detection is independent of the object's surface structure, colour, or reflectivity.
To ensure high operational reliability, the object must be large enough to interrupt the light beam completely.

Diffuse Reflection Sensors

Both receiver and transmitter are in one housing. The transmitted light is reflected by the object to be detected.
The diffused light intensity at the receiver serves as the switching condition. Regardless of the sensitivity setting, the front part of an object regularly reflects worse than the rear part, which can have the after-effect of false switching operations.

Retro-Reflective Sensors

Here, too, transmitter and receiver are in the same housing. The emitted light beam is directed back to the receiver by a reflector. An interruption of the light beam initiates a switching operation, and it does not matter where the interruption occurs.
Retro-reflective sensors enable large operating distances with switching points that are exactly reproducible while demanding little mounting effort. Any object interrupting the light beam is accurately detected independently of its colour or surface structure.

Wednesday, 19 July 2017

6 Steps on How to Learn or Teach LabVIEW OOP - Part 2


Step 4 – Practice!
This stage is harder than the last. You need to make sure:
Each child class should exactly reflect the abstract methods. If your calling code ever cares which subclass it is calling, by using strange parameters or converting the type, then you are violating LSP, the Liskov substitution principle, the L of SOLID.
Each child class should have something relevant to do for each of the abstract methods. If it has methods that make no sense, this is a violation of the interface segregation principle.
Step 5 – Finish SOLID
Read about the open-closed principle and the dependency inversion principle and try it in a couple of sections of code.
Open-closed basically means that you leave interfaces (abstract classes in LabVIEW) in place. You can then change the behavior by creating a new child class (open for extension) without having to modify the original code (closed to modification). This goes well with the dependency inversion principle, which says that higher-level classes should depend only on interfaces (again, abstract classes). The lower-level code implements these interfaces, so the high-level code can call the lower-level code without a direct dependency. This can help in places where coupling is difficult to design out.
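LabVIEW classes are graphical, but the same idea can be sketched in text form. In this Python sketch (all names invented), the high-level routine depends only on an abstraction (dependency inversion), and new behavior is added by writing a new child class rather than editing existing code (open-closed):

    from abc import ABC, abstractmethod

    class Logger(ABC):                       # the "interface" (abstract class)
        @abstractmethod
        def write(self, message: str) -> None: ...

    class ConsoleLogger(Logger):             # one concrete child
        def write(self, message: str) -> None:
            print(message)

    class FileLogger(Logger):                # added later, without touching run_test
        def __init__(self, path: str):
            self.path = path
        def write(self, message: str) -> None:
            with open(self.path, "a") as f:
                f.write(message + "\n")

    def run_test(logger: Logger) -> None:
        # High-level code depends only on the abstraction (DIP)
        # and is extended via new Logger children (OCP).
        logger.write("test started")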
I leave these principles to the end because I think they are the easiest ones with which to write difficult-to-read code. I'm still trying to find a balance with them: following them wholeheartedly creates lots of indirection, which can affect readability. I also think we don't get as much benefit from them in LabVIEW, since we don't tend to package code within projects in the same way as other languages do. (This may be a good topic for another post!)
Step 6 – Learn some design patterns
This was obviously part of the point of this article. When I came back to design patterns after understanding design better and the SOLID principles it allowed me to look at the patterns in a different way. I could relate them to the principles and I understood what problems they solved.
For example, the command pattern (where you effectively have a QMH that takes message classes) is a perfect example of a solution that meets the open-closed principle for an entire process. You can extend the message handler by adding support for new message types, creating new message classes instead of modifying the original code. This is how the Actor Framework works, and it has allowed the developers to create a framework with a reliable implementation of actor control while still letting you add messages to define the functionality.
Once you understand why these design patterns exist you can then apply some critical thinking about why and when to use them. I personally dislike the command pattern in LabVIEW because I don’t think the additional overhead of a large number of message classes is worth the benefit of being able to add messages to a QMH without changing the original code.
I think this understanding will help you use design patterns more effectively and make you less likely to end up with a spaghetti of patterns thrown together because that is what everyone was talking about.
Urmm… so what do I do?
So I know this doesn’t have the information you need to actually do this so much as set out a program. Actually, all the steps still follow the NI course on OOP so you could simply self-pace this for general learning material.

Friday, 14 July 2017

Project Management in Medical Industry




The medical industry has grown many times over during the last decade, and the pace of innovation and development shows that, as technology advances, we can expect better and more affordable solutions in the health sector built on innovative medical devices. For major medical device companies, innovation leads to prototyping, a major constituent of developing a medical device after thorough research. It is very important to focus on the development of the prototype itself, and one thing that hinders the process is developing the software to control the devices. For such project managers, ReadyDAQ is the one-stop shop for application development needs, offering the customizability and flexibility to develop applications for your data acquisition devices to match your needs and requirements.
 
The usual process of developing a prototype involves extensive research and study of the subject, after which data needs to be acquired from sensors and operating devices such as pumps, motors, and drivers for the smooth functioning of the device. Normally, software needs to be developed individually for each device concerned, but with ReadyDAQ you have the freedom to plug and play devices without developing an exclusive application from scratch.
Since research projects come with tight schedules and deadlines, it is the project manager's duty to make sure that all time and focus are devoted to the development of the device, which makes ReadyDAQ the perfect solution to this problem.
ReadyDAQ supports multiple devices at a time, making it easier to control, read, acquire, and store signals. In the medical industry, it is very important to get precise readings, so ReadyDAQ plays a major role in any medical innovation or R&D center where error-free values are necessary to build the perfect prototype.
Built in the LabVIEW environment, this application is the perfect solution for project managers who want to save time and expense. So, if you are looking to build the perfect prototype for the next big thing in the medical device industry, try ReadyDAQ. We offer a 7-day, 100% money-back guarantee.

Friday, 7 July 2017

Setting up LabVIEW Project

Complete the following steps to set up the LabVIEW project:
 
  1. Launch LabVIEW by selecting Start»All Programs»National Instruments»LabVIEW.
  2. Click the Empty Project link in the Getting Started window to display the Project Explorer window. You can also select File»New Project to display the Project Explorer window.
  3. Select Help and make sure that Show Context Help is checked. You can refer to the context help throughout this process for information about items in the Project Explorer window and in your VIs.
  4. Right-click the top-level Project item in the Project Explorer window and select New»Targets and Devices from the shortcut menu to display the Add Targets and Devices dialog box.
  5. Make sure that the Existing target or device radio button is selected.
  6. Expand Real-Time CompactRIO.
  7. Select the CompactRIO controller to add to the project and click OK.
  8. Select FPGA Interface from the Select Programming Mode dialog box to put the system into FPGA Interface programming mode.
  9. Tip: Use the CompactRIO Chassis Properties dialog box to change the programming mode in an existing project. Right-click the CompactRIO chassis in the Project Explorer window and select Properties from the shortcut menu to display this dialog box.
  10. Click Discover in the Discover C Series Modules? dialog box if it appears.
  11. Click Continue.
  12. Drag and drop the C Series module(s) that will run in Scan Interface mode under the chassis item. Leave any modules you plan to write FPGA code for under the FPGA target.

Real-Time Processor

The CompactRIO embedded system features an industrial 400 MHz Freescale MPC5200 processor that deterministically executes your LabVIEW Real-Time applications on the reliable Wind River VxWorks real-time operating system. Built-in functions for transferring data between the FPGA and the real-time processor within the CompactRIO embedded system are available in LabVIEW. You can choose from more than 600 built-in LabVIEW functions to build your multithreaded embedded system for real-time analysis, control, data logging, and communication. To save development time, you can also integrate existing C/C++ code with LabVIEW Real-Time code.

Starting a New CompactRIO Project in LabVIEW

Begin by creating a new project in LabVIEW, where you will manage your hardware resources and code.
1.       Create a new project in LabVIEW by selecting File » New Project.
2.       Add your existing CompactRIO system to the project by right-clicking the Project item at the top of the tree and selecting New » Targets and Devices.
3.      This dialog lets you discover systems on your network or add offline systems. Expand the Real-Time CompactRIO folder, select your system, and click OK. Note: If your system is not listed, LabVIEW could not find it on the network. Ensure that the system is properly configured with a valid IP address in Measurement & Automation Explorer. You can also choose to manually enter the IP address if the system is on a remote subnet.

Select the Appropriate Programming Model

LabVIEW provides two programming models for CompactRIO systems. If you have both LabVIEW Real-Time and LabVIEW FPGA on your development computer, you will be prompted to choose which programming model to use. You can change this setting later in the LabVIEW Project if needed.
The Scan Interface (CompactRIO Scan Mode) option allows you to program the real-time processor of your CompactRIO system, but not the FPGA. In this mode, NI provides a pre-defined personality for the FPGA that periodically scans the I/O and places it in a memory map, making it available to LabVIEW Real-Time. CompactRIO Scan Mode is sufficient for applications that require single-point access to I/O at rates of a few hundred hertz. To learn more about scan mode, read the "Using CompactRIO Scan Mode with NI LabVIEW" white paper and view the benchmarks.
The LabVIEW FPGA Interface option allows you to unlock the true power of CompactRIO by customizing the FPGA personality in addition to programming the real-time processor, achieving performance that would typically require custom hardware. Using LabVIEW FPGA, you can implement custom timing and triggering, off-load signal processing and analysis, create custom protocols, and access I/O at its maximum rate.
Select the appropriate programming model for your application.
LabVIEW will then try to detect the C Series I/O modules present in your system and automatically add them to the LabVIEW Project under the chassis. Note: If your system was not discovered and you chose to add it offline, you will need to add the chassis and C Series I/O modules manually. The LabVIEW Help discusses this process for both scan mode and FPGA mode.

Monday, 3 July 2017

CompactRIO Scan Mode Tutorial

This section shows how to create a basic control application on CompactRIO using scan mode. If you chose the LabVIEW FPGA Interface instead, see the LabVIEW FPGA Tutorial. At this point, you should have a new LabVIEW Project containing your CompactRIO system, including the controller, chassis, and C Series I/O modules. An NI 9211 thermocouple input module is used in this tutorial; nonetheless, the process can be followed for any analogue input module.
1.       Save the project by selecting File»Save and entering "Basic control with scan mode". Click OK.
2.       This project will contain only one VI, the LabVIEW Real-Time application that runs embedded on the CompactRIO controller. Create the VI by right-clicking the CompactRIO real-time controller in the project and selecting New»VI. Save the VI as RT.vi.
3.       The key operation of this application comprises three routines: startup, run, and shutdown. An easy way to achieve this order of operation is a flat sequence structure. Place a flat sequence structure with three frames on the RT.vi block diagram.
4.       Next, add a timed loop to the Run frame of the sequence structure. Timed loops provide the ability to synchronize code to various time bases, including the NI Scan Engine that reads and writes scan mode I/O.
5.       To configure the timed loop, double-click the clock icon on the left input node.
6.       Select Synchronize to Scan Engine as the Loop Timing Source and click OK. This causes the code in the timed loop to execute once, immediately after each I/O scan, ensuring that any I/O values used in the loop are the most recent ones.
7.      The previous step configured the timed loop to run synchronized to the scan engine. Now configure the rate of the scan engine itself by right-clicking the CompactRIO real-time controller in the LabVIEW Project and selecting Properties.
8.       Choose Scan Engine from the categories on the left and enter 100 ms as the Scan Period; all the I/O in the CompactRIO system will then be updated every 100 ms (10 Hz). From this page, you can also set the Network Publishing Period, which controls how often the I/O values are published to the network for remote monitoring and debugging. Click OK.
9.       Now that the I/O scan rate is configured, add the I/O reads to your application. With CompactRIO Scan Mode, you can simply drag and drop the I/O variables from the LabVIEW Project to the RT block diagram. Expand the CompactRIO real-time controller, chassis, and the I/O module you would like to log. Select AI0 and drag and drop it into the timed loop on your RT.vi diagram.
10.   Next, configure the digital module for specialty digital Pulse-Width Modulated output so that a PWM signal can control the heater unit. To do this, right-click the digital module in the project and select Properties. In the C Series Module Properties dialog, select Specialty Digital Configuration and a Specialty Mode of Pulse-Width Modulation. Specialty Digital mode allows the module to perform pattern-based digital I/O at rates significantly faster than is available with the scan interface. Click OK, and the module will now be in PWM mode.
11.   You are now ready to add the PWM output to the block diagram. Expand the Mod2 object in the project and drag and drop the PWM0 item onto the block diagram, as you did with the AI0 I/O node in the previous step.
12.   Next, add the PID control logic to the program. Right-click the block diagram to open the functions palette and click the Search button in the top right of the palette.
13.   Search for PID, select PID.vi in the Control Design and Simulation palette, drag it onto the block diagram inside the timed loop, and wire up the PID VI.
14.   The set point input is not wired yet. That is because it is best practice to keep user interface (UI) objects out of the high-priority control loop. To interact with and adjust the set point at run time, create controls that are read in a lower-priority loop, and pass their values into the high-priority control loop using single-process shared variables. The two controls in this application (set point and stop) require two new single-process shared variables.

Create a single-process shared variable by right-clicking the RT CompactRIO target in the LabVIEW Project and selecting New»Library. Rename the library to something descriptive, such as RTComm. Then right-click the new library and select New»Variable to open the Shared Variable Properties dialog. Name the variable SetPoint (for example) and select Single Process as the variable type in the Variable Type drop-down box. Finally, click the RT FIFO option in the left-hand tree and check the Enable RT FIFO box.
15.   Create another single-process shared variable in the library you just made. This variable is for the Stop control, which will stop the program when needed. Give it the same settings as the previous SetPoint variable, except for the type, which should be Boolean.
16.   Next, create the user interface: a slide control for the set point, a waveform chart, a numeric control, and a Stop (Boolean) control.
17.   Finish the program by creating a secondary (non-timed) loop for the UI objects and completing the wiring of the block diagram.
18.   Note the extension of I/O into the configuration and shutdown frames to ensure that the I/O is in a known state when the program begins and ends. The basic control application should now be ready to run.
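For reference, the control logic that steps 13 and 14 wire up graphically amounts to the loop below. This is a minimal text sketch only; the gains, set point, and the stand-in read/write functions are invented placeholders for the AI0 and PWM0 nodes:

    import time

    KP, KI, KD = 2.0, 0.5, 0.1   # placeholder PID gains
    DT = 0.1                     # 100 ms scan period, as configured in step 8
    SETPOINT = 50.0              # placeholder set point (degrees C)

    def pid_step(setpoint, measurement, state):
        error = setpoint - measurement
        state["integral"] += error * DT
        derivative = (error - state["previous"]) / DT
        state["previous"] = error
        return KP * error + KI * state["integral"] + KD * derivative

    def read_ai0():              # stand-in for the AI0 thermocouple read
        return 42.0              # replace with real hardware access

    def write_pwm0(duty):        # stand-in for the PWM0 output node
        print(f"PWM duty cycle: {duty:.2f}")

    state = {"integral": 0.0, "previous": 0.0}
    for _ in range(10):          # stands in for the scan-synchronized timed loop
        duty = pid_step(SETPOINT, read_ai0(), state)
        write_pwm0(min(max(duty, 0.0), 1.0))   # clamp duty cycle to 0..1
        time.sleep(DT)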

Monday, 12 June 2017

I²C (INTER-INTEGRATED CIRCUIT)


I²C (Inter-Integrated Circuit), pronounced I-squared-C or I-two-C, is a multi-master, multi-slave, packet switched, single-ended, serial computer bus invented by Philips Semiconductor (now NXP Semiconductors). It is typically used for attaching lower-speed peripheral ICs to processors and microcontrollers in short-distance, intra-board communication. Alternatively, I²C is spelled I2C (pronounced I-two-C) or IIC (pronounced I-I-C).
Since October 10, 2006, no licensing fees are required to implement the I²C protocol. However, fees are still required to obtain I²C slave addresses allocated by NXP.
SMBus, defined by Intel in 1995, is a subset of I²C, defining a stricter usage. One purpose of SMBus is to promote robustness and interoperability. Accordingly, modern I²C systems incorporate some policies and rules from SMBus, sometimes supporting both I²C and SMBus, requiring only minimal reconfiguration either by commanding or output pin use.
I²C uses only two bidirectional open-drain lines, Serial Data Line (SDA) and Serial Clock Line (SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V, although systems with other voltages are permitted.
The I²C reference design has a 7-bit or a 10-bit (depending on the device used) address space. Common I²C bus speeds are the 100 kbit/s standard mode and the 10 kbit/s low-speed mode, but arbitrarily low clock frequencies are also allowed. Recent revisions of I²C can host more nodes and run at faster speeds (400 kbit/s Fast mode, 1 Mbit/s Fast mode plus or Fm+, and 3.4 Mbit/s High-Speed mode). These speeds are more widely used on embedded systems than on PCs. There are also other features, such as 16-bit addressing.
Note the bit rates are quoted for the transactions between master and slave without clock stretching or other hardware overhead. Protocol overheads include a slave address and perhaps a register address within the slave device, as well as per-byte ACK/NACK bits. Thus the actual transfer rate of user data is lower than those peak bit rates alone would imply. For example, if each interaction with a slave inefficiently allows only 1 byte of data to be transferred, the data rate will be less than half the peak bit rate.
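A back-of-the-envelope calculation makes that overhead concrete. Assuming a single-byte write in 100 kbit/s standard mode (START, address byte + ACK, register byte + ACK, data byte + ACK, STOP, counting START and STOP as roughly one bit time each):

    BIT_RATE = 100_000                        # standard mode, bits per second

    bits_per_transaction = 1 + 9 + 9 + 9 + 1  # START, 3 bytes with ACKs, STOP
    user_bits = 8                             # only the one data byte is payload

    efficiency = user_bits / bits_per_transaction
    print(f"Effective data rate ~ {BIT_RATE * efficiency:.0f} bit/s "
          f"({efficiency:.0%} of the raw bit rate)")   # ~27586 bit/s, 28%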
The maximal number of nodes is limited by the address space and also by the total bus capacitance of 400 pF, which restricts practical communication distances to a few meters. The relatively high impedance and low noise immunity require a common ground potential, which again restricts practical use to communication within the same PC board or a small system of boards.

Friday, 9 June 2017

SERIAL COMMUNICATIONS - RS-485

RS-485, also known as TIA-485(-A), EIA-485, is a standard defining the electrical characteristics of drivers and receivers for use in serial communications systems. Electrical signaling is balanced, and multipoint systems are supported. The standard is jointly published by the Telecommunications Industry Association and Electronic Industries Alliance (TIA/EIA). Digital communications networks implementing the standard can be used effectively over long distances and in electrically noisy environments. Multiple receivers may be connected to such a network in a linear, multi-drop configuration. These characteristics make such networks useful in industrial environments and similar applications.
The EIA once labeled all its standards with the prefix "RS" (Recommended Standard), but the EIA-TIA officially replaced "RS" with "EIA/TIA" to help identify the origin of its standards. The EIA has officially disbanded, and the standard is now maintained by the TIA. The RS-485 standard is superseded by TIA-485, but often engineers and applications guides continue to use the RS-485 designation.
RS-485 supports inexpensive local networks and multidrop communications links, using the same differential balanced line over twisted pair as RS-422. It is generally accepted that RS-485 can be used with data rates up to 10 Mbit/s and distances up to 1,200 m (4,000 ft), but not at the same time. A rule of thumb is that the speed in bit/s multiplied by the length in meters should not exceed 108. Thus a 50-meter cable should not signal faster than 2 Mbit/s. Under some conditions, it can be used up to data transmission speeds of 64 Mbit/s.
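That rule of thumb translates directly into a one-line estimate (an approximation only; the real limit depends on the cable and drivers):

    def max_rs485_bit_rate(cable_length_m: float) -> float:
        """Rule of thumb: bit rate (bit/s) * length (m) <= 1e8."""
        return 1e8 / cable_length_m

    print(max_rs485_bit_rate(50))     # 2e6   -> about 2 Mbit/s on a 50 m cable
    print(max_rs485_bit_rate(1200))   # ~83e3 -> roughly 83 kbit/s at 1,200 m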
In contrast to RS-422, which has a single driver circuit which cannot be switched off, RS-485 drivers use three-state logic allowing individual transmitters to be deactivated. This allows RS-485 to implement linear bus topologies using only two wires. The equipment located along a set of RS-485 wires are interchangeably called nodes, stations or devices. The recommended arrangement of the wires is a connected series of point-to-point (multi-dropped) nodes, i.e. a line or bus, not a star, ring, or multiple connected networks. Star and ring topologies are not recommended because of signal reflections or excessively low or high termination impedance. If a star configuration is unavoidable, special RS-485 star/hub repeaters are available which bidirectionally listen for data on each span and then retransmit the data onto all other spans.
Ideally, the two ends of the cable will have a termination resistor connected across the two wires. Without termination resistors, reflections of fast driver edges can cause data corruption. Termination resistors also reduce electrical noise sensitivity due to the lower impedance. The value of each termination resistor should be equal to the cable's characteristic impedance (typically 120 ohms for twisted pairs). Somewhere along the set of wires, pull-up or pull-down resistors are installed to fail-safe bias each data wire when the lines are not being driven by any device. This way, the lines will be biased to known voltages, and nodes will not interpret the noise from undriven lines as actual data; without biasing resistors, the data lines float in such a way that electrical noise sensitivity is greatest when all device stations are silent or unpowered.

Tuesday, 30 May 2017

Introduction to RS232 Serial Communication - Part 1

Serial communication is basically the transmission or reception of data one bit at a time. Today’s computers generally address data in bytes or some multiple thereof. A byte contains 8 bits. A bit is basically either a logical 1 or zero. Every character on this page is actually expressed internally as one byte. The serial port is used to convert each byte to a stream of ones and zeroes as well as to convert streams of ones and zeroes to bytes. The serial port contains an electronic chip called Universal Asynchronous Receiver/Transmitter (UART) that actually does the conversion.
The serial port has many pins. We will discuss the transmit and receive pins first. Electrically speaking, whenever the serial port sends a logical one (1), a negative voltage is produced on the transmit pin. Whenever the serial port sends a logical zero (0), a positive voltage is produced. When no data is being sent, the serial port's transmit pin voltage is negative (1), and the port is said to be in the MARK state. Note that the serial port can also be forced to keep the transmit pin at a positive voltage (0), in which case it is said to be in the SPACE or BREAK state. (The terms MARK and SPACE are also used simply to denote a negative voltage (1) or a positive voltage (0) at the transmit pin, respectively.)
When transmitting a byte, the UART (serial port) first sends a START BIT, which is a positive voltage (0), followed by the data (generally 8 bits, but possibly 5, 6, 7, or 8 bits), followed by one or two STOP BITS, which are a negative voltage (1). The sequence is repeated for each byte sent.
At this point, you may want to know what is the duration of a bit. In other words, how long does the signal stay in a particular state to define a bit? The answer is simple. It is dependent on the baud rate. The baud rate is the number of times the signal can switch states in one second. Therefore, if the line is operating at 9600 baud, the line can switch states 9,600 times per second. This means each bit has the duration of 1/9600 of a second or about 100 µsec.
When transmitting a character there are other characteristics other than the baud rate that must be known or that must be setup. These characteristics define the entire interpretation of the data stream.
The first characteristic is the length of the byte that will be transmitted. This length, in general, can be anywhere from 5 to 8 bits.
The second characteristic is parity, which can be even, odd, mark, space, or none. With even parity, the parity bit is set to a logical 1 whenever the data bits contain an odd number of 1s, so that the total number of 1s transmitted is even. With odd parity, the parity bit is set to a logical 1 whenever the data bits contain an even number of 1s, so that the total number of 1s transmitted is odd. With MARK parity, the parity bit is always a logical 1; with SPACE parity, it is always a logical 0. With no parity, no parity bit is transmitted.
A third characteristic is the number of stop bits. This value, in general, is 1 or 2.
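In software, these framing characteristics map one-to-one onto the parameters of a serial-port API. A minimal example using the pyserial package (the port name is a placeholder; the settings spell out "9600 8E1"):

    import serial  # pyserial package

    port = serial.Serial(
        "/dev/ttyUSB0",                 # placeholder port name (e.g. COM3 on Windows)
        baudrate=9600,
        bytesize=serial.EIGHTBITS,      # byte length: 8 data bits
        parity=serial.PARITY_EVEN,      # even parity
        stopbits=serial.STOPBITS_ONE,   # one stop bit
        timeout=1.0,
    )
    port.write(b"hello")                # the UART adds start/parity/stop bits
    print(port.read(5))                 # read up to 5 bytes back
    port.close()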
Stay tuned for part two, it will be published soon.

Friday, 26 May 2017

Digital Outputs

Digital outputs require a similar investigation and many of the same considerations as digital inputs. These include careful consideration of the output voltage range, maximum update rate, and maximum drive current required. However, outputs also have a number of specific considerations, as described below. Relays have the benefit of high off impedance, low off leakage, low on resistance, indifference between AC and DC signals, and built-in isolation. However, they are mechanical devices and consequently provide lower reliability and typically slower response rates. Semiconductor outputs generally have an advantage in speed and reliability.
Semiconductor switches also tend to be smaller than their mechanical equivalents, so a semiconductor-based digital output device will typically provide more outputs per unit volume. When using DC semiconductor devices, be careful to consider whether your system requires the output to sink or source current.

Current Limiting/Fusing

Most outputs, and especially those used to switch high currents (100 mA or so), offer some type of output protection. There are three sorts most commonly available. The first is a simple fuse. Inexpensive and reliable, the main issue with fuses is that they cannot be reset and must be replaced when blown. The second type of current limiting is provided by a resettable fuse. Typically, these devices are variable resistors: once the current reaches a certain threshold, their resistance begins to rise rapidly, ultimately limiting the current and stopping the flow.
Once the offending connection is removed, the resettable fuse returns to its original low-impedance state. The third type of limiter is an actual current monitor that turns the output off if and when an overcurrent is detected. This "supervisor" limiter has the advantage of not requiring replacement following an overcurrent event. Many implementations of the supervisor configuration also allow the overcurrent trip to be set on a channel-by-channel basis, even with a single output board.


Friday, 5 May 2017

What are String Pots?

String pots are designed to measure linear displacement. They are typically lower cost than LVDTs and can offer much longer measurement distances. As their name implies, the basis of a string pot is a string or cable and a potentiometer. Essentially, a string and a spring are attached to the wiper screw of the potentiometer, and as the string is pulled, the potentiometer resistance changes. The string pot provides a calibration factor that describes what displacement is represented by a given percentage of resistance change.
As a simple variable-resistance device with a linear output, most string pots are interfaced to standard A/D boards. The most common connection arrangement ties a voltage reference to one side of the string pot, with the other side connected to ground.
The "wiper" is then connected to an A/D input channel. With the string completely retracted, the measured voltage will be equal to either the reference voltage or zero. With the string completely extended, the voltage measured will be the opposite (either zero or the reference voltage). At any intermediate string extension, the voltage measured will be proportional to the percentage of string "out".
Make sure your voltage reference has the output current capability to drive the string pot resistance. Your measurement will be in error by the same percentage as any voltage reference error. In some cases, it may be advantageous to drive the string pot with a higher-capacity, lower-accuracy voltage source. Should you require higher accuracy than the voltage source provides, you can always dedicate an A/D channel to measuring the voltage source.
This makes the system virtually immune to errors in the voltage source. Note also that string pots are single-ended, isolated devices. When connecting a string pot to a differential input, be sure to connect the string pot/reference ground to the A/D channel's low or "-" input. Failing to make this connection will likely result in erratic and even "odd" behavior as the input "-" terminal floats through the input amplifier's common-mode range.

Monday, 10 April 2017

“Other” types of DAQ I/O Hardware - Part 1

This article describes the "other" common types of DAQ I/O devices, such as analog outputs, digital inputs, digital outputs, counter/timers, and special DAQ functions, covering devices such as motion I/O, synchro/resolvers, LVDT/RVDTs, string pots, quadrature encoders, and ICP/IEPE piezoelectric crystal controllers. It also covers topics such as communications interfaces, timing, and synchronization capabilities.

Analog Outputs

Analog or D/A outputs are used for a variety of purposes in data acquisition and control systems. To properly match the D/A device to your application, it is necessary to consider a variety of specifications, which are listed and explained below.

Number of Channels 

As this is a fairly clear requirement, we won't spend much time on it. Make sure you have enough outputs to do the job. If it is possible that your application may be expanded or modified later on, you may wish to specify a system with a few "spare" outputs. At a minimum, make certain you can add outputs to the system down the road without major difficulty.
Resolution

As with A/D channels, the resolution of a D/A output is a key specification. The resolution describes the number or range of different possible output states (typically voltages or currents) the system is capable of providing. This specification is usually given in terms of "bits", where the resolution is defined as 2^(# of bits). For example, 8-bit resolution corresponds to a resolution of one part in 2^8, or 256. Similarly, 16-bit resolution corresponds to one part in 2^16, or 65,536. Combined with the output range, the resolution determines how small a change in the output may be commanded. To determine the step size, simply divide the full-scale range of the output by its resolution. A 16-bit output with a 0-10 V full-scale range provides 10 V/2^16, or 152.6 µV of resolution. A 12-bit output with a 4-20 mA full scale provides 16 mA/2^12, or 3.906 µA of resolution.
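The arithmetic is easy to wrap in a helper; both examples from the paragraph above reproduce as expected:

    def dac_step_size(full_scale: float, bits: int) -> float:
        """Smallest commandable output change for a D/A converter."""
        return full_scale / 2 ** bits

    print(dac_step_size(10.0, 16))    # 1.52587890625e-04 V (~152.6 uV)
    print(dac_step_size(16e-3, 12))   # 3.90625e-06 A       (~3.906 uA)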

Accuracy 

Although accuracy is frequently equated with resolution, they are not the same. An analog output with one-microvolt resolution is not necessarily (or even usually) accurate to one microvolt. Outside of audio outputs, D/A system accuracy is typically on the order of a few LSBs. However, it is important to check this specification, as not all analog output systems are created equal. The most significant and common error contributions in analog output systems are offset, gain/reference, and linearity errors.

Tuesday, 21 March 2017

Be Careful With Registrations

We found a memory growth issue in a client's application, which used user events for interprocess communication. The problem: any user events that are registered but unhandled by an event structure will increase your application's memory use when generated.
A very similar issue was raised at the 2011 CLA summit and produced CAR 288741 (fixed in LabVIEW 2013). That CAR was filed because unhandled registered events actually reset the timeout in event structures. There was a lot of good discussion over at LAVA, with users speculating on ways to use this new feature, but what I didn't see raised at any point was the fact that generating user events which are not handled in an event structure will cause memory growth in your application in addition to resetting the event timeout.
From my understanding, we see this behavior because the register events node creates a mailbox for events to be placed in, but since there is no case in the event structure to handle this specific event, it is never removed from the mailbox. This leads to an increase in the application's memory each time that event is generated. I have gone back and forth on whether this is expected behavior or a bug. At the time of writing, I believe it to be expected behavior, but there are certain things that are either inconsistencies in LabVIEW or demonstrate my misunderstanding of how LabVIEW events work.
One of these inconsistencies, and a reason this issue can be so hard to track down, is the way unhandled events are displayed in the Event Inspector Window.
The problem I have is that although "Some Event" is not handled in the event structure, it doesn't show up in the list of Unhandled Events in Event Queue(s). Interestingly, the event appears in the event log with the event type "User Event (unhandled)", which implies LabVIEW knows the event is not handled in this particular instance but still keeps it in the mailbox. What is confusing, to me at least, is that even though nothing shows up in the event inspector's list of unhandled events, flushing the event queue discards these events (also preventing the memory growth).

Monday, 20 February 2017

Common Problems with LabVIEW Real-time Module: Part 2

The second part of our series will address the difficulty with setting up a connection with a Compact Field Point and SQL Server 2000.
Let’s set up a possible scenario:
You have a SQL Server with which you would like to communicate directly (preferably with no software in between).
There is more than one way to try to solve this problem, but we've narrowed the options down to the two that are most likely to work well:

1.  FTP files using cFP onto IIS FTP server (push data, then DTS).

This should be fairly easy to accomplish. As an alternative, you can write a LabVIEW app for your host computer (the SQL Server computer) that uses the Internet Toolkit to FTP the files off the cFP module and writes the data from the files into the SQL Server using the SQL Toolkit. As another alternative, you can use DataSockets to read the file via FTP, parse it, and write the data to the SQL Server using the SQL Toolkit.

2. Write a custom driver/protocol (which will run on the cFP)

You can accomplish this; however, it is subject to some limitations and difficulties. One approach would be a modification of the first solution: create a host-side LabVIEW program that communicates with the cFP controller via a custom TCP protocol you implement, retrieving data at specified intervals and logging it directly to the database.
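A host-side sketch of that second approach in Python follows. Everything here is an invented placeholder (the request message, port, address, and the sqlite3 stand-in for SQL Server); the actual solution would be written in LabVIEW with the SQL Toolkit:

    import socket, sqlite3, time   # sqlite3 stands in for SQL Server here

    CFP_ADDR = ("192.168.0.10", 5000)    # placeholder cFP IP address and port
    POLL_PERIOD_S = 10.0

    db = sqlite3.connect("log.db")
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")

    while True:
        with socket.create_connection(CFP_ADDR, timeout=5) as s:
            s.sendall(b"READ\n")               # invented request message
            value = float(s.recv(64).strip())  # controller replies with a number
        db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), value))
        db.commit()
        time.sleep(POLL_PERIOD_S)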
How do you like the solutions our LabVIEW experts are providing? Are you working on a LabVIEW project at the moment? Let us know in the comments.