


Deep neural networks are renowned for their ability to process visual information, and in the past few years they have become a key component of many computer vision applications.

Among the key problems neural networks can solve is detecting and localizing objects in images. Object detection is used in many different domains, including autonomous driving, video surveillance, and healthcare.

In this post, I will briefly review the deep learning architectures that help computers detect objects.

Convolutional neural networks

One of the key components of most deep learning–based computer vision applications is the convolutional neural network (CNN). Invented in the 1980s by deep learning pioneer Yann LeCun, CNNs are a type of neural network that is efficient at capturing patterns in multidimensional spaces. This makes CNNs especially good for images, though they are used to process other types of data too. (To focus on visual data, we’ll consider our convolutional neural networks to be two-dimensional in this article.)

Every convolutional neural network is composed of one or several convolutional layers, software components that extract meaningful values from the input image. Each convolutional layer is composed of several filters, square matrices that slide across the image and register the weighted sum of pixel values at different locations. Each filter has different values and extracts different features from the input image. The output of a convolutional layer is a set of “feature maps.”
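To make the filter operation concrete, here is a minimal sketch of a single filter sliding across an image in plain NumPy (no deep learning framework; the image and kernel values are made up for illustration):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a square filter across a 2D image and record the
    weighted sum of pixel values at each location (valid padding)."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

# A 3x3 vertical-edge filter applied to a tiny 5x5 "image"
# with a dark-to-bright vertical boundary
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]], dtype=float)
feature_map = convolve2d(image, vertical_edge)  # 3x3 feature map
```

The feature map responds strongly where the filter's pattern (here, a vertical edge) appears in the image, which is exactly what "extracting a feature" means in practice.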

When stacked on top of each other, convolutional layers can detect a hierarchy of visual patterns. For instance, the lower layers will produce feature maps for vertical and horizontal edges, corners, and other simple patterns. The next layers can detect more complex patterns such as grids and circles. As you move deeper into the network, the layers will detect complicated objects such as cars, houses, trees, and people.

Each layer of the neural network encodes specific features from the input image.

Most convolutional neural networks use pooling layers to gradually reduce the size of their feature maps and keep the most prominent parts. Max-pooling, currently the main type of pooling layer used in CNNs, keeps the maximum value in a patch of pixels. For example, a pooling layer of size 2 takes 2×2-pixel patches from the feature maps produced by the preceding layer and keeps the highest value in each. This operation halves the size of the maps and retains the most relevant features. Pooling layers enable CNNs to generalize their capabilities and be less sensitive to the displacement of objects across images.
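A minimal sketch of max-pooling with size 2, assuming a small NumPy feature map with made-up values:

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Keep the maximum value in each size x size patch,
    halving the feature map when size is 2."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # drop ragged edges
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 1, 5, 2],
                 [2, 0, 3, 4]], dtype=float)
pooled = max_pool(fmap)  # 2x2 result: each value is the max of a 2x2 patch
```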

Finally, the output of the convolutional layers is flattened into a one-dimensional vector that is a numerical representation of the features contained in the image. That vector is then fed into a series of “fully connected” layers of artificial neurons that map the features to the kind of output expected from the network.

Architecture of convolutional neural network (CNN)

The most basic task for convolutional neural networks is image classification, in which the network takes an image as input and returns a list of values that represent the probability that the image belongs to one of several classes.

For example, say you want to train a neural network to detect all 1,000 classes of objects contained in the popular open-source dataset ImageNet. In that case, your output layer will have 1,000 numerical outputs, each of which contains the probability of the image belonging to one of those classes.
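The output layer typically applies a softmax function to turn the network's raw scores into probabilities. A toy sketch with three made-up classes (a real ImageNet classifier would have 1,000 outputs):

```python
import numpy as np

def softmax(logits):
    """Turn raw output scores into probabilities that sum to 1."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical raw scores for a 3-class toy problem (cat, dog, bird)
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
predicted_class = int(np.argmax(probs))  # highest-probability class index
```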

You can always create and test your own convolutional neural network from scratch. But most machine learning researchers and developers use one of several tried and tested convolutional neural networks such as AlexNet, VGG16, and ResNet-50.

Object detection datasets

Object-detection networks need to be trained on precisely annotated images.

While an image classification network can tell whether an image contains a certain object or not, it won’t say where in the image the object is located. Object detection networks provide both the class of objects contained in an image and a bounding box that provides the coordinates of that object.

Object detection networks bear much resemblance to image classification networks and use convolution layers to detect visual features. In fact, most object detection networks use an image classification CNN and repurpose it for object detection.

Object detection is a supervised machine learning problem, which means you must train your models on labeled examples. Each image in the training dataset must be accompanied with a file that includes the boundaries and classes of the objects it contains. There are several open-source tools that create object detection annotations.
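Annotation formats vary by tool; the structure below is a hypothetical COCO-style example with made-up file names and coordinates (Pascal VOC uses XML, and YOLO uses plain-text files):

```python
# One image's annotations: each object gets a class label and a
# bounding box. bbox is [x, y, width, height] in pixels.
annotation = {
    "image": "street.jpg",
    "objects": [
        {"class": "car",        "bbox": [34, 120, 200, 90]},
        {"class": "pedestrian", "bbox": [300, 80, 40, 110]},
    ],
}
```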

Example of an annotation file for object detection training data.

The object detection network is trained on the annotated data until it can find regions in images that correspond to each kind of object.

Now let’s look at a few object-detection neural network architectures.

The R-CNN deep learning model

R-CNN architecture.

The Region-based Convolutional Neural Network (R-CNN) was proposed by AI researchers at the University of California, Berkeley, in 2014. The R-CNN is composed of three key components.

First, a region selector uses “selective search,” an algorithm that finds regions of pixels in the image that might represent objects, also called “regions of interest” (RoI). The region selector generates around 2,000 regions of interest for each image.

Next, the RoIs are warped to a predefined size and passed on to a convolutional neural network. The CNN processes every region separately and extracts its features through a series of convolution operations. It then uses fully connected layers to encode the feature maps into a single-dimensional vector of numerical values.
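A rough sketch of the crop-and-warp step, using nearest-neighbor sampling in NumPy (the original R-CNN warped regions to 227×227 for AlexNet; the image and RoI here are made up):

```python
import numpy as np

def warp_roi(image, roi, out_size=227):
    """Crop a region of interest and warp it to a fixed square size
    with nearest-neighbor sampling."""
    x, y, w, h = roi
    crop = image[y:y + h, x:x + w]
    # Map each output pixel back to a source pixel in the crop
    rows = (np.arange(out_size) * h / out_size).astype(int)
    cols = (np.arange(out_size) * w / out_size).astype(int)
    return crop[np.ix_(rows, cols)]

image = np.random.rand(480, 640)              # grayscale image for simplicity
warped = warp_roi(image, (100, 50, 120, 80))  # any RoI -> fixed 227x227 input
```

Because the CNN has a fixed input size, every one of the ~2,000 regions must pass through this step before feature extraction, which is a big part of why R-CNN is slow.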

Finally, a classifier machine learning model maps the encoded features obtained from the CNN to the output classes. The classifier has a separate output class for “background,” which corresponds to anything that isn’t an object.

Object detection with R-CNN.

The original R-CNN paper suggests the AlexNet convolutional neural network for feature extraction and a support vector machine (SVM) for classification. But in the years since the paper was published, researchers have used newer network architectures and classification models to improve the performance of R-CNN.

R-CNN suffers from a few problems. First, the model must generate and crop 2,000 separate regions for each image, which can take quite a while. Second, the model must compute the features for each of the 2,000 regions separately. This amounts to a lot of calculations and slows down the process, making R-CNN unsuitable for real-time object detection. And finally, the model is composed of three separate components, which makes it hard to integrate computations and improve speed.

Fast R-CNN

Fast R-CNN architecture.

In 2015, the lead author of the R-CNN paper proposed a new architecture called Fast R-CNN, which solved some of the problems of its predecessor. Fast R-CNN brings feature extraction and region selection into a single machine learning model.

Fast R-CNN receives an image and a set of RoIs and returns a list of bounding boxes and classes of the objects detected in the image.

One of the key innovations in Fast R-CNN was the “RoI pooling layer,” an operation that takes the CNN's feature maps and the regions of interest for an image and extracts the corresponding features for each region. This allowed Fast R-CNN to compute features for all the regions of interest in a single pass over the image, as opposed to R-CNN, which processed each region separately. The result was a significant boost in speed.
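A simplified sketch of RoI pooling on a single NumPy feature map (real implementations handle batches, channels, and the stride between image and feature-map coordinates; the RoI and grid size here are illustrative):

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=2):
    """Divide an RoI on the feature map into an out_size x out_size grid
    and max-pool each cell, producing a fixed-size output for any RoI."""
    x, y, w, h = roi
    region = feature_map[y:y + h, x:x + w]
    ys = np.linspace(0, h, out_size + 1).astype(int)  # cell boundaries
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

fmap = np.arange(64, dtype=float).reshape(8, 8)  # one CNN feature map
pooled = roi_pool(fmap, (1, 2, 6, 4))            # any RoI -> fixed 2x2 output
```

The key property is that RoIs of any shape yield the same fixed-size output, so the fully connected layers can classify every region from one shared feature map.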

However, one issue remained unsolved. Fast R-CNN still required the regions of the image to be extracted and provided as input to the model. Fast R-CNN was still not ready for real-time object detection.

Faster R-CNN


Faster R-CNN architecture.

Faster R-CNN, introduced in 2016, solves the final piece of the object-detection puzzle by integrating the region extraction mechanism into the object detection network.

Faster R-CNN takes an image as input and returns a list of object classes and their corresponding bounding boxes.

The architecture of Faster R-CNN is largely similar to that of Fast R-CNN. Its main innovation is the “region proposal network” (RPN), a component that takes the feature maps produced by a convolutional neural network and proposes a set of bounding boxes where objects might be located. The proposed regions are then passed to the RoI pooling layer. The rest of the process is similar to Fast R-CNN.
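A sketch of how an RPN-style component lays out its candidate boxes (“anchors”) over the feature map; the scales, aspect ratios, and stride below are illustrative, not the paper's exact configuration:

```python
import numpy as np

def generate_anchors(fmap_h, fmap_w, stride=16,
                     scales=(128, 256), ratios=(0.5, 1.0, 2.0)):
    """Place candidate boxes of several scales and aspect ratios at
    every feature-map cell; the RPN then scores each one for objectness."""
    anchors = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            cx, cy = j * stride, i * stride  # cell mapped to image coordinates
            for s in scales:
                for r in ratios:
                    # Keep area ~ s*s while varying the aspect ratio w/h = r
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2, w, h])
    return np.array(anchors)

anchors = generate_anchors(2, 3)  # 2*3 cells x 2 scales x 3 ratios = 36 boxes
```

Instead of running selective search outside the network, the RPN scores and refines these anchors directly from the shared feature maps, which is what makes region proposal nearly free.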

By integrating region detection into the main neural network architecture, Faster R-CNN achieves near-real-time object detection speed.


YOLO

YOLO architecture.

In 2016, researchers at the University of Washington, the Allen Institute for AI, and Facebook AI Research proposed “You Only Look Once” (YOLO), a family of neural networks that improved the speed and accuracy of object detection with deep learning.

The main improvement in YOLO is the integration of the entire object detection and classification process in a single network. Instead of extracting features and regions separately, YOLO performs everything in a single pass through a single network, hence the name “You Only Look Once.”
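A much-simplified sketch of the idea: the network's single pass produces an S×S grid of predictions, which is then decoded into boxes. Real YOLO predicts several boxes plus class probabilities per cell; the tensor layout and threshold here are illustrative:

```python
import numpy as np

def decode_grid(preds, conf_threshold=0.5):
    """Decode an S x S grid where each cell predicts one box as
    [confidence, x, y, w, h]; keep cells confident they hold an object."""
    boxes = []
    S = preds.shape[0]
    for i in range(S):
        for j in range(S):
            conf, x, y, w, h = preds[i, j]
            if conf >= conf_threshold:
                # x, y are offsets within the cell; w, h are image fractions
                boxes.append(((j + x) / S, (i + y) / S, w, h, float(conf)))
    return boxes

preds = np.zeros((3, 3, 5))
preds[1, 2] = [0.9, 0.5, 0.5, 0.2, 0.3]  # one confident detection
boxes = decode_grid(preds)               # -> one box with confidence 0.9
```

Since detection reduces to one forward pass plus this cheap decoding step, there is no separate proposal stage to wait for, which is what makes the approach fast enough for video.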

YOLO can perform object detection at video streaming frame rates and is suitable for applications that require real-time inference.

In the past few years, deep learning object detection has come a long way, evolving from a patchwork of different components to a single neural network that works efficiently. Today, many applications use object-detection networks as one of their main components. It’s in your phone, computer, car, camera, and more. It will be interesting (and perhaps creepy) to see what can be achieved with increasingly advanced neural networks.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.




