


Signal Processing in Artificial Intelligence

AI: Is The Intelligence Artificial Or Amplified?

Mark Heymann, Managing Partner. Mark Heymann & Assoc. HFTP Hall of Fame; BA Economics Brown Univ, MS Business, Columbia Univ.


In today's environment, barely a day goes by without some discussion or article about the latest in artificial intelligence. It's an exciting time as we look at what computers can accomplish with or without human intervention.

To take half a step back and level the playing field for the discussion that follows, I will highlight four key areas of what is called artificial intelligence.

• Machine Learning: A process by which a system gains more information as it parses data; based on that accumulated historical information, it makes predictions about what is going to happen in the future.

• Deep Learning: This refers to a machine learning approach that utilizes artificial neural networks, employing multiple layers of processing to progressively extract more advanced features from data.

• Natural Language Processing: Natural language processing (NLP) employs machine learning techniques to unveil the underlying structure and significance within textual content. Through NLP applications, businesses can analyze text data and gain insights about individuals, locations and events, enabling a deeper comprehension of social media sentiment and customer interactions.

• Cognitive Computing: Cognitive computing pertains to technology frameworks that, in a general sense, draw from the scientific domains of artificial intelligence and signal processing. These frameworks encompass a range of technologies, including machine learning, logical reasoning, natural language processing, speech recognition, visual object recognition, human-computer interaction, as well as dialog and narrative generation, among other capabilities. There is currently no agreed-on definition for cognitive computing in the industry or academia.

Computers And Decision Making

My intent here is not to rehash a group of definitions, but with this as a baseline, I want to specifically turn to decision making and how much involvement computers should have in this process.

I think where the final decision should lie depends not just on the impact of a decision on the business but also on the risk profile of the decision's outcome. Further, when that decision is assessed and reviewed, who will be held accountable for the result? This is not an area that discussions of artificial intelligence tend to focus on very much.

Years ago—literally over 40 years ago—we developed some initial technology to help hotels predict revenue center activity. These centers not only accounted for daily room occupancy but also factored in the anticipated number of guests to other facilities, such as restaurants and bars. This process resembled the familiar task of forecasting widget production to align with demand while avoiding any significant inventory excesses.

The approach at that time was what we now commonly call machine learning. Over time, these technologies and algorithms have evolved to now fall more into the category of deep learning. But at the end of the day, regardless of any computer-generated predictions, it was still up to the manager of the specific revenue center or production environment to make the final decision on projected volume.

Once that decision was made, one of the key areas influenced by these projections was staffing levels. This pertained not only to daily staffing but, in the service industry, often extended to staffing levels in half-hour increments as needed.
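To make the arithmetic concrete, here is a minimal sketch, in Python, of how a naive forecast of covers might be converted into half-hour staffing needs. The history values, the hours of service, and the productivity standard (covers per server per half hour) are made-up assumptions for illustration, not figures from any real system.

```python
import math
import statistics

# Hypothetical covers served in recent comparable breakfast periods
history = [118, 125, 131, 122, 140, 138, 129]
forecast_covers = statistics.mean(history)       # naive forecast: average of recent history

half_hours_open = 6                               # e.g., a three-hour breakfast service
covers_per_server_per_half_hour = 12              # assumed productivity standard

covers_per_half_hour = forecast_covers / half_hours_open
servers_needed = math.ceil(covers_per_half_hour / covers_per_server_per_half_hour)

print(f"Forecast {forecast_covers:.0f} covers -> {servers_needed} server(s) per half hour")
```

In practice, the forecast would come from the kind of learning system described above, and the manager would still review the result before committing to a schedule.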

As systems have advanced and the scope of data analysis has expanded, the accuracy of predictions has consistently improved. However, it remains a rarity for the manager overseeing this specific aspect of the operation to be fully removed from the final predictions, which encompass staffing and cost levels that will be incurred.

Where Human Intervention Is Needed

Turning now to the broader economy and looking at where AI is being tried, we see examples of systems operating with no human intervention whatsoever. At times, it is clear that human intervention is absolutely needed.

Consider, for example, trading systems within the stock market. In such systems, human intervention has proven critical in preventing excessively wide market fluctuations. This is just one area, but I'm sure if you take a moment to sit back and think about other areas where computers are making decisions based on some level of AI, you'll find many more examples of where human intervention is still crucial.

The Business Impact Of Decisions

As we look at the application of what is broadly called artificial intelligence, it becomes more and more important to understand the risk impact of specific decisions on business results. Simply put, the larger the impact of a decision on an operation, the more important it is to ensure that the decision is not left completely to the computer.

If the decision to be made has a very low risk of business failure and/or the cost of failure is very low, then it's easy to turn to the computer for the determination.

We all remember when Deep Blue played chess and, at first, suffered defeat. However, as it continued to learn, it won chess matches, sparking our excitement about the computer's capabilities. Nevertheless, it's important to recognize that winning a chess game, which holds little real-world consequence, is quite different from the task of making decisions such as estimating the demand for breakfast service or predicting the number of travelers heading to Chicago.

The cost of getting that number wrong, or the impact on other revenue centers, can be significant when both direct and indirect effects are counted.

Therefore, I believe it benefits us to understand the consequences of the decisions being made, as well as the associated costs and risks of potential failures. This understanding can guide us in determining the appropriate level of management involvement in making the final decision. Final accountability for decision making in key areas needs to remain with management, especially when the cost of failure is high.

Over time, computer information and interpretation will become more important and enlightening. But as we look for accountability in management decisions, we may want to think more about AI being defined as "amplified" intelligence as compared to purely "artificial."



Stethoscopes, Electronics, And Artificial Intelligence

For all the advances in medical diagnostics made over the last two centuries of modern medicine, from the ability to peer deep inside the body with the help of superconducting magnets to harnessing the power of molecular biology, it seems strange that the enduring symbol of the medical profession is something as simple as the stethoscope. Hardly a medical examination goes by without the frigid kiss of a stethoscope against one's chest, while we search the practitioner's face for a telltale frown revealing something wrong from deep inside us.

The stethoscope has changed little since its invention and yet remains a valuable if problematic diagnostic tool. Efforts have been made to solve these problems over the years, but only with relatively recent advances in digital signal processing (DSP), microelectromechanical systems (MEMS), and artificial intelligence has any real progress been made. This leaves so-called smart stethoscopes poised to make a real difference in diagnostics, especially in the developing world and under austere or emergency situations.

The Art of Auscultation

Since its earliest appearance in 1816 as an impromptu paper cone rolled by Dr. René Laennec, partly to protect the modesty of his female patients but also to make it possible to listen to the heart sounds of an obese woman, designs for stethoscopes have come to the familiar consensus configuration: a chest piece, a pair of earpieces, and a tube or tubes connecting them. The chest piece consists of either a broad, thin diaphragm or an open-ended bell, while the earpieces are designed to fit snugly into the practitioner's ears under light spring pressure to occlude as much environmental noise as possible.

State of the art – a Littmann Cardiology IV stethoscope. Source: 3M Littmann

The chest piece is pressed directly against the patient's body and acts as an impedance matcher that couples faint vibrations from the watery interior to the air outside. The column of air in the tubing connected to the chest piece conducts vibrations up to the earpieces and onto the eardrums of the practitioner. Medically, the process is referred to as auscultation, and though the sounds thus heard are faint and often coupled with noise both from inside and outside the patient, with practice they can help paint a surprisingly complete diagnostic picture.

While auscultation is used for sounds generated all over the body, it's mostly used for sounds made in the chest by the heart and by the lungs. The choice of which chest piece to use depends on the frequency of the sounds that need to be heard. Lung sounds are generally of higher pitch, in the range of 200 to 2,000 Hertz, as air vibrates over and around the structures of the respiratory tract. The diaphragm chest piece is usually used for these sounds, with the thin, taut disc collecting sound over a wide area and coupling as much acoustic energy as possible up to the earpieces. Heart sounds are generally lower pitched, between 20 and 200 Hertz, and are caused by the turbulence of blood flowing through vessels and the mechanical sounds of the heart's valves. Such sounds are best heard using the bell chest piece, which essentially uses the patient's skin as a diaphragm. Many stethoscopes have dual-headed chest pieces that can be swiveled between diaphragm and bell.
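As a rough illustration of those two frequency bands, the sketch below uses SciPy band-pass filters to split a digitised chest recording into the heart range (roughly 20 to 200 Hz) and the lung range (roughly 200 to 2,000 Hz). The sampling rate and the random placeholder signal are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, low_hz, high_hz, fs, order=4):
    """Band-pass filter a 1-D signal between low_hz and high_hz."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 8000                                  # assumed sample rate of the digitised chest audio
t = np.arange(0, 5, 1 / fs)
chest_audio = np.random.randn(t.size)      # stand-in for a real recording

heart = bandpass(chest_audio, 20, 200, fs)     # bell range: low-pitched heart sounds
lungs = bandpass(chest_audio, 200, 2000, fs)   # diaphragm range: higher-pitched lung sounds
```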

No matter which chest piece is used, auscultation is more of an art than a science. While it's an extremely sensitive instrument, a stethoscope is fairly broadband and couples every noise and vibration up the pipe and into the ear. A human body can be a noisy place, with bowel sounds (or borborygmi, perhaps the best word in the English language) interfering with lung sounds or the patient's nervous chatter overriding a faint heart murmur. Add to that the tendency of a chest, which is essentially a big, hollow drum, to couple and resonate environmental noises, and picking a weak signal from broadband noise can be a challenging task.

Canceling the Noise

It seems like the stethoscope is a perfect target for electronic improvements — add a microphone to the chest piece, wire it to a small amplifier, and use earbuds instead of earpieces. Of course that's been tried many times since the dawn of the transistor age and the small, powerful amplifiers it made possible.

So why haven't doctors been sporting electronic stethoscopes for years? One reason is tradition: few tools are as strongly associated with a profession as the stethoscope, and to some degree, the instrument represents the trust that ideally exists between practitioner and patient. There are practical reasons for this apparent stasis, too. Chief among them is the stethoscope's simplicity — no batteries required, it fits in a pocket, and it is ready to go as soon as the earpieces are placed. It doesn't need to boot up, it doesn't suffer from electrical interference, it has few parts to wear out or break down, and it is easily cleaned and disinfected. Plus it's cheap: a decent scope can be had for a couple of dollars.

Current version of the smart stethoscope. Source: Zebadiah Potler/Johns Hopkins University

But time and technology may finally have caught up with the old acoustic stethoscope, with perhaps the promise of improving patient care and saving lives. Johns Hopkins University recently announced a new "smart stethoscope" that completely replaces the acoustic system while maintaining the same form-factor as a traditional stethoscope. The smart stethoscope has an electronic chest piece using MEMS transducers to pick up body sounds, with sensitive, compact amplifiers driving small speakers in the earpieces.

If that were all the Johns Hopkins invention offered, it wouldn't be much of an advancement. However, the Hopkins team also attacked the biggest problem with auscultation: noise. By incorporating some of the same technology that noise-canceling headphones employ, the smart stethoscope uses an additional microphone to pick up environmental noise and filter it out algorithmically. A research paper (PDF link) discusses the details; the video below shows a prototype at work filtering out the cries of children in a pediatric clinic as they couple through the patient's chest cavity. Notice too that the patient's heart sounds are eliminated, allowing lung sounds to come through clearly.
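The paper describes the team's own algorithm; purely as an illustration of the general idea behind reference-microphone noise cancellation, here is a minimal least-mean-squares (LMS) adaptive filter sketch. The tap count and step size are arbitrary, and `chest_mic`/`ambient_mic` are hypothetical arrays sampled at the same rate.

```python
import numpy as np

def lms_cancel(primary, reference, taps=64, mu=0.001):
    """Subtract an adaptive estimate of ambient noise (reference mic) from the chest signal."""
    w = np.zeros(taps)
    cleaned = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference-mic samples
        noise_est = w @ x                 # current estimate of noise leaking into the chest piece
        e = primary[n] - noise_est        # error signal = body sounds with the noise removed
        w += 2 * mu * e * x               # LMS weight update
        cleaned[n] = e
    return cleaned

# cleaned = lms_cancel(chest_mic, ambient_mic)   # both float arrays at the same sample rate
```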

The implications of an effective, affordable noise-canceling stethoscope can't be overstated. Worldwide, very few medical interventions occur in a quiet setting. Crowded clinics, busy emergency rooms, active battlefields, and the backs of ambulances are where medicine is practiced, and where decisions are made based on what is heard in a stethoscope, or what is missed. Being able to filter out that noise and bring the signal to the front will make auscultation less art and more science, and hopefully improve outcomes.

What's more, since the Hopkins smart stethoscope is a digital instrument, it can serve as input for an artificial intelligence application that automatically analyzes the data for signs of pathology. The system has been trained to identify the characteristic but oftentimes subtle sounds of pneumonia, a leading killer of children worldwide, using data from 1,500 patients. The app currently discriminates between healthy patients and those with pneumonia 87 percent of the time; apps for other diseases are currently being developed.
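The Hopkins classifier itself is not described here, so the following is only a toy sketch of the general pattern: summarise each recording with a few spectral band energies and fit a simple classifier on labelled examples. The band edges, sample rate, and random stand-in data are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def band_energy_features(audio, fs=8000, bands=((100, 400), (400, 800), (800, 1600))):
    """Summarise a recording by its average power in a few frequency bands."""
    f, _, Sxx = spectrogram(audio, fs=fs)
    power = Sxx.mean(axis=1)
    return np.array([power[(f >= lo) & (f < hi)].sum() for lo, hi in bands])

# Placeholder data: in reality X comes from labelled lung-sound recordings,
# with y = 1 for pneumonia and y = 0 for healthy.
X = np.vstack([band_energy_features(np.random.randn(8000)) for _ in range(20)])
y = np.random.randint(0, 2, 20)
clf = LogisticRegression().fit(X, y)
```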

The Hopkins smart stethoscope has a lot of potential, especially if the cost per unit can be kept low. Most electronic stethoscopes currently run to $500 apiece or more, making access to them difficult in the developing world, where they can arguably do the most good. It will be difficult to match the price point or simplicity of the tried-and-true acoustic instruments, but it seems like the next generation of electronic stethoscopes might just be a better deal in terms of outcomes.


Artificial Intelligence And Machine Learning Based Image Processing

By V Srinivas Durga Prasad, Softnautics

Image processing is the process of converting an image to a digital format and then performing various operations on it to gather useful information. Artificial intelligence (AI) and machine learning (ML) have had a huge influence on many fields of technology in recent years. Computer vision, the ability for computers to understand images and videos on their own, is one of the top trends in this industry. The popularity of computer vision is growing like never before, and its applications span industries such as automobiles, consumer electronics, retail, manufacturing and many more.

Image processing can be done in two ways: analogue image processing, which works on physical photographs, printouts and other hard copies, and digital image processing, which uses computer algorithms to manipulate digital images. The input in both cases is an image. The output of analogue image processing is always an image, whereas the output of digital image processing may be an image or information associated with that image, such as data on features, attributes and bounding boxes.

According to a report published by Data Bridge Market Research, the image processing systems market is expected to grow at a CAGR of 21.8%, reaching a market value of USD 151,632.6 million by 2029. Image processing is used in a variety of use cases today, including visualisation, pattern recognition, segmentation, image information extraction, classification and many others.

Image processing working mechanism

Artificial intelligence and machine learning algorithms usually follow a workflow to learn from data. Consider a generic model of a working algorithm for an image processing use case. To start, AI algorithms require a large amount of high-quality data to learn from and to predict highly accurate results. As a result, we must ensure that the images are well processed, annotated and generic enough for AI/ML image processing. This is where computer vision (CV) comes in; it is the field concerned with machines understanding image data. We can use CV to load, process, transform and manipulate images to create an ideal dataset for the AI algorithm.
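As a small illustration of that preparation step, the sketch below uses OpenCV to load, resize and normalise images into a batch suitable for a downstream model. The file names and target size are placeholders.

```python
import cv2
import numpy as np

def prepare_image(path, size=(224, 224)):
    """Load, resize, and normalise one image for a downstream model."""
    img = cv2.imread(path)                      # BGR uint8 image from disk
    img = cv2.resize(img, size)                 # uniform spatial size for the whole dataset
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # most frameworks expect RGB ordering
    return img.astype(np.float32) / 255.0       # scale pixel values to [0, 1]

# Placeholder paths; in practice these come from the annotated dataset
batch = np.stack([prepare_image(p) for p in ["img1.jpg", "img2.jpg"]])
```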

Let's understand the workflow of a basic image processing system

An overview of an image processing system

Acquisition of image

The initial stage is image acquisition, in which a sensor captures the image and transforms it into a usable digital format, often with some basic pre-processing.
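A minimal acquisition sketch with OpenCV, assuming a camera is attached as device 0:

```python
import cv2

cap = cv2.VideoCapture(0)               # first attached camera acts as the sensor
ok, frame = cap.read()                  # one captured frame arrives as a NumPy array
cap.release()
if ok:
    cv2.imwrite("captured.png", frame)  # persist the digitised image for later stages
```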

Enhancement of image

Image enhancement is the technique of bringing out and emphasising specific interesting characteristics which are hidden in an image.
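For example, contrast enhancement can be sketched with OpenCV's histogram equalisation; the input file name and CLAHE parameters are illustrative choices.

```python
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
globally_equalised = cv2.equalizeHist(gray)                  # global histogram equalisation
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # contrast-limited adaptive variant
locally_equalised = clahe.apply(gray)                        # brings out local, hidden detail
```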

Restoration of image

Image restoration is the process of recovering an image's appearance from a degraded version. Unlike image enhancement, restoration is carried out using specific mathematical or probabilistic models of the degradation.
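One common model-based restoration method is non-local means denoising, sketched below with OpenCV; the file name and filter strengths are arbitrary example values.

```python
import cv2

noisy = cv2.imread("noisy.png")
# Non-local means denoising: a statistical restoration method rather than simple enhancement
restored = cv2.fastNlMeansDenoisingColored(noisy, None, h=10, hColor=10,
                                           templateWindowSize=7, searchWindowSize=21)
cv2.imwrite("restored.png", restored)
```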

Colour image processing

A variety of digital colour models, such as HSI (hue-saturation-intensity), CMY (cyan-magenta-yellow) and RGB (red-green-blue), are used in colour image processing.
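A short OpenCV sketch of moving between colour models (HSV being OpenCV's closest built-in analogue to HSI); the input path is a placeholder.

```python
import cv2

bgr = cv2.imread("photo.jpg")               # OpenCV loads images as BGR by default
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # hue-saturation-value, close to the HSI model
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
cmy = 255 - rgb                             # CMY is the simple complement of RGB
```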

Compression and decompression of image

This stage enables adjustments to image resolution and size, whether reducing an image or restoring it to its original size, without lowering image quality below a desirable level. Lossy and lossless compression are the two main types of image file compression employed at this stage.
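Both flavours can be sketched with OpenCV's built-in encoders; the quality and compression levels below are arbitrary.

```python
import cv2

img = cv2.imread("photo.png")
cv2.imwrite("lossy.jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 60])       # lossy: smaller file, some detail lost
cv2.imwrite("lossless.png", img, [cv2.IMWRITE_PNG_COMPRESSION, 9])  # lossless: exact pixels preserved
```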

Morphological processing

Digital images are processed according to their shapes using a set of techniques known as morphological operations. These operations depend on the relative ordering of pixel values rather than on their absolute numerical values, which makes them well suited to processing binary images. Morphological processing helps remove imperfections from the structure of an image.
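A minimal OpenCV sketch of morphological opening and closing on a binary mask; the 3x3 structuring element and file name are arbitrary choices.

```python
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((3, 3), np.uint8)                          # structuring element defines the shape probe
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion then dilation: removes small specks
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation then erosion: fills small holes
```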

Segmentation, representation and description

The segmentation process divides a picture into segments, and each segment is then represented and described in a form that a computer can process further. Representation captures a segment's boundary or regional characteristics, while description extracts quantitative features that help distinguish one class of objects from another.
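As an illustration, the sketch below segments a greyscale image with Otsu thresholding and then describes each region with simple quantitative features; the file name is a placeholder.

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# Otsu's method picks the threshold automatically, splitting foreground from background
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Describe each segment with simple features (area and bounding box)
regions = [(cv2.contourArea(c), cv2.boundingRect(c)) for c in contours]
```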

Recognition of image

Recognition assigns a label to an object based on its description. Commonly employed algorithms for recognising images include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF) and principal component analysis (PCA).
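A short SIFT example with OpenCV (SIFT ships with opencv-python 4.4 and later); the image path is a placeholder.

```python
import cv2

gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()                                    # scale-invariant feature transform
keypoints, descriptors = sift.detectAndCompute(gray, None)  # keypoints plus 128-D descriptors
print(len(keypoints), "keypoints detected")
```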

Frameworks for AI image processing

OpenCV is a well-known computer vision library that provides numerous algorithms along with utilities to support them. Its many modules include object detection, machine learning and image processing. With this library, you can perform image processing tasks such as data extraction, restoration and compression.

TensorFlow, created by Google, is one of the best-known end-to-end machine learning frameworks for building and training neural networks that locate and categorise images at a level approaching human perception. It offers features such as execution on multiple parallel processors, cross-platform support, GPU configuration and support for a wide range of neural network architectures.
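A minimal Keras sketch of such an image classifier, with arbitrary layer sizes and a hypothetical 10-class problem; `train_images` and `train_labels` are assumed to be supplied elsewhere.

```python
import tensorflow as tf

# A small convolutional classifier for 224x224 RGB images and 10 classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)   # hypothetical labelled dataset
```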

PyTorch, developed by Facebook (now Meta), is intended to shorten the time it takes to get from a research prototype to commercial deployment. It includes features such as a rich tool and library ecosystem, support for popular cloud platforms, a simple transition from development to production and distributed training.
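A small PyTorch sketch of that research-to-production path: adapt a pretrained torchvision model and export it with TorchScript (requires torchvision 0.13 or later for the `weights` argument). The 10-class head and file name are assumptions.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained research model
model.fc = torch.nn.Linear(model.fc.in_features, 10)          # adapt the head for 10 classes
scripted = torch.jit.script(model.eval())                     # package for production deployment
scripted.save("classifier.pt")
```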

Caffe is a deep learning framework intended for image classification and segmentation. It offers features such as simple switching between CPU and GPU, model definition through configuration rather than hard-coding, and computation organised around blobs.

Applications

Machine vision

The ability of a computer to comprehend the world visually is known as machine vision. One or more video cameras are combined with analogue-to-digital conversion and digital signal processing, and the resulting image data is transmitted to a computer or robot controller. This technology helps companies improve automated processes through automated analysis. For instance, specialised machine vision image processing methods can often sort parts more efficiently than tactile methods when robotic systems must handle parts of various shapes and sizes. These methods use very specific algorithms that consider the colour or greyscale values in the image to accurately define outlines or sizing for an object.

Pattern recognition

Pattern recognition is the technique of identifying patterns with the aid of a machine learning system. Data is generally classified based on previously acquired knowledge or on statistical information extracted from patterns and their representation. Image processing is used to identify the objects in an image, and machine learning is then used to train the system to recognise changes in patterns. Pattern recognition is used in computer-aided diagnosis, handwriting recognition, image identification, character recognition and more.

Video processing

A video is nothing more than a series of images shown in quick succession; the frame rate and the quality of each frame determine the video's quality. Video processing covers noise reduction, detail enhancement, motion detection, frame rate conversion, aspect ratio conversion, colour space conversion and more. Televisions, VCRs, DVD players, video codecs and other devices all use video processing techniques.

Transmission and encoding

Today, thanks to technological advancements, we can instantly view live CCTV footage or video feeds from anywhere in the world, which shows that image transmission and encoding have both advanced significantly. Progressive image transmission is a technique of encoding and decoding digital information representing an image so that the image's main features, such as outlines, can be presented at low resolution first and then refined to greater resolutions. In progressive transmission, an image is encoded as multiple scans of the same image at different resolutions. Progressive decoding produces a rough initial reconstruction of the image, followed by successively better versions whose fidelity is built up from succeeding scans on the receiver side. Additionally, image compression reduces the amount of data needed to describe a digital image by eliminating redundancy, ensuring the processed image is suitable for transmission.
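A toy sketch of the coarse-to-fine idea using an OpenCV image pyramid; this is not a real progressive codec such as progressive JPEG, just an illustration, and the file name is a placeholder.

```python
import cv2

img = cv2.imread("frame.png")

# "Encoder" side: build coarse-to-fine versions so a rough preview can be sent first
levels = [img]
for _ in range(3):
    levels.append(cv2.pyrDown(levels[-1]))   # each level halves the resolution

# "Receiver" side: show the lowest resolution first, then refine as finer levels arrive
for i, level in enumerate(reversed(levels)):
    preview = cv2.resize(level, (img.shape[1], img.shape[0]))
    cv2.imwrite(f"preview_{i}.png", preview)  # each pass is a sharper approximation of the image
```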

Image sharpening and restoration

Here, "image sharpening" and "restoration" refer to the processes used to enhance or edit photographs taken with a camera to produce a desired result. This includes zooming, blurring, sharpening, converting between greyscale and colour, detecting edges, image retrieval and image recognition. The goals of image restoration techniques are to recover lost resolution and to reduce noise, and they operate either in the frequency domain or in the image (spatial) domain. Deconvolution, which is carried out in the frequency domain, is the simplest and most widely used technique for image restoration.
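A compact NumPy sketch of frequency-domain Wiener deconvolution, a regularised form of the deconvolution mentioned above; the point spread function (PSF) and the constant `k` are assumptions for the example.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: undo a known blur kernel (PSF)."""
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur
    G = np.fft.fft2(blurred)                # spectrum of the degraded image
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter; k regularises against noise
    return np.real(np.fft.ifft2(G * W))

psf = np.ones((5, 5)) / 25.0                # e.g. a uniform 5x5 blur kernel
# restored = wiener_deconvolve(blurred_gray_image, psf)   # blurred_gray_image: 2-D float array
```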

Image processing can be employed to enhance an image's quality, remove unwanted artefacts from an image, or even create new images entirely from scratch. Nowadays, image processing is one of the fastest-growing technologies, with huge potential for wide adoption in areas such as video and 3D graphics, statistical image processing, recognising and tracking people and objects, diagnosing medical conditions, PCB inspection, robotic guidance and control, and automatic driving in all modes of transportation.

At Softnautics, we help industries design vision-based AI solutions such as image classification and tagging, visual content analysis, object tracking, identification, anomaly detection, face detection and pattern recognition. Our team of experts has experience developing vision solutions based on optical character recognition, NLP, text analytics, cognitive computing and more, involving various FPGA platforms.

Author: V Srinivas Durga Prasad

Srinivas is a marketing professional at Softnautics working on techno-commercial write-ups, marketing research and trend analysis. He is a marketing enthusiast with 7+ years of experience across diversified industries. He loves to travel and is fond of adventures.





