

Why This Smart Home Solutions Provider Isn’t Worried About a Silicon Shortage – ReadWrite



Deanna Ritchie

The world is in the grips of a major silicon-components shortage, and it’s affecting production in nearly every sector. At the start of the pandemic, demand for components soared along with the demand for computers, servers, and gaming consoles.

Social distancing at factories and chip-hoarding among tech giants have led to the most serious “chip famine” in recent memory. The shortage will undoubtedly affect the price and availability of electronic devices through 2021 — but smart home experience pioneer Plume isn’t worried.

You may know Plume as the creator of the world’s first self-optimizing WiFi, but the company is more than a WiFi solutions provider. Lately, Plume has been on the front lines of bringing order and integration to the smart home. Its open-source software has paved the way for an industry-standard that could provide some relief during the shortage.

Most remarkably, Plume is finally delivering on the original vision for the smart home — complete interoperability.

Interoperability refers to the ability of a system, such as a computer system or a piece of software, to exchange and make use of information with devices made by different manufacturers.

OpenSync Could be the Missing Link for the Connected Home

In 2018, Plume and Samsung, among other industry players, announced OpenSync — the first multi-industry open-source framework — designed to connect in-home hardware to the cloud.

OpenSync is a cloud-agnostic, CPE-agnostic, and silicon-agnostic layer of software that operates across WiFi-enabled devices. It allows Internet service providers to deliver, manage, and support residential cloud services for their customers. The cloud-agnostic application allows workloads to be moved seamlessly between cloud platforms and other infrastructures without problems with operating dependency.

Licensing of the Open Source Framework

OpenSync is available under a BSD-3 open-source license. The BSD-3 is the Modified BSD License (3-clause) — which is in the family of permissive free software licenses. This license is compatible with RDK, OpenWRT, and prplWRT. It’s also integrated into the SDKs and designs from the industry’s leading silicon providers.

While OpenSync may create some more competition for Plume, it will ultimately safeguard the company’s future through versatility, compatibility, and its ability to scale.

According to Plume CEO Fahri Diner, consumer needs are evolving. We’ve gone from simply needing Internet connectivity to craving faster speeds for entertainment and socialization. Now consumers are looking for personalized cross-device experiences.

Connectivity to Support All Services

Connectivity to the home and services in the home are starting to decouple, says Diner. Companies such as Amazon, Apple, and Google have bypassed Internet providers to offer services directly to the consumer. To stay competitive, providers have realized they must be able to curate, deliver, and support these services.

Liberty Global, Bell Canada, and Comcast have already joined the OpenSync initiative. And why not? It makes it incredibly easy for Communications Service Providers (CSPs) to swiftly deliver the services their customers want. It’s also a natural fit for Plume, who has always sought to partner with—rather than compete against—providers.

How OpenSync Enables Smart Home 2.0

OpenSync doesn’t just help usher service providers into the smart home era. In a big-picture sense, creating a common software layer allows Plume to fulfill the original promise of the smart home. Every device and every service can be fully integrated and controlled from the cloud.

“Consumers today demand choice when bringing products and services into their home that work best for their lifestyle, without being locked into any one ecosystem,” says Samsung Vice President Chanwoo Park.

The Consumer Wants Easy Install and Interoperability for All Smart Home Devices

A smart home user might have an Amazon Echo, a Nest camera, a Samsung smart refrigerator, and Apple TV. Ease and convenience go out the window the minute those things don’t play well together.

With OpenSync, consumers can equip their homes with smart networking gear from many different suppliers regardless of their CSP (service provider). Consumers don’t want to have to stick to a single brand of networking devices. These issues will become especially relevant as we see tech companies try to exert more control over the smart home ecosystem.

Demand for the New “Works-With” Generation of Products

Last May, for instance, Google ended “Works With Nest” and transitioned to “Works With Google Assistant.” In a nutshell, Google didn’t want users controlling Nest products through third-party smart home applications. Google also makes its own proprietary WiFi system.

Imagine if the “works-with” generation of products were extended to “Works With Google WiFi,” making it more difficult for consumers with Google WiFi to use IoT devices from other brands.

Open-industry standards supplied by multiple vendors, such as OpenSync, are the best way to fight any attempts at monopolizing the smart home through proprietary WiFi networking.

The Beauty of Plume’s Hardware-Agnostic Solutions

Plume seems to have anticipated that major brands and service providers would soon be vying for consumers’ complete devotion. The company cleverly side-stepped this problem by making its smart home solutions hardware-agnostic. Plume’s adaptive WiFi works seamlessly with a customer’s existing CSP and any OpenSync-enabled hardware.

Plume has always been hardware-agnostic, putting the software required to connect a networking device to the cloud onto any brand of device, with any type of chipset inside. Plume took this to the next level by open-sourcing that software with its partners in 2018.

Total Hardware-Agnostic Capabilities

Working toward total hardware-agnostic capabilities allows not just Plume, but anyone, to put the software on any device with any chipset. Flexibility and availability have been a win for Plume and its partners. Wireless customers can get all the frills of adaptive WiFi and other smart home services without locking themselves into a particular hardware supplier or chipset vendor.

OpenSync gave Plume an opportunity to make inroads with every major provider because it solved a growing problem CSPs had. Their customers were rapidly adopting smart home tech — along with a layer of new services.

Until OpenSync came along, providers didn’t have a good solution to help their customers manage this. With OpenSync, the Internet of Things suddenly just works.

Why OpenSync is a Shock-Absorber to the Silicon Crisis

An open-source framework could prove to be even more valuable as the silicon shortage becomes dire. Plume has forged partnerships with major chipset vendors, and the top 20 leading WiFi CPEs are supported by OpenSync.

All OpenSync-powered CPEs can coexist on the same network, even devices of different WiFi generations, each utilized optimally in a mixed network. This extends the lifecycle of hardware that would otherwise be rendered obsolete and provides peace of mind to the CSPs who issue this equipment.

Fully Adaptive WiFi and Smart Home Solutions

In many ways, Plume has positioned itself fully as an adaptive WiFi and smart home solutions provider. It has brought peace and harmony to the device-integration madness without stepping on anybody’s toes. Consequently, over 22 million homes are being powered by Plume.

The silicon shortage will inevitably bring more manufacturing bottlenecks and higher prices for the consumer. Users won’t want to replace their hardware as often, and the smart home will have to adapt. That’s where OpenSync comes in.

When all players are using a common layer of software, users can mix and match to customize their experience.

Image Credit: fauxels; pexels

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


Building a (Big) Data Pipeline the Right Way – ReadWrite




Gathering and analyzing data has been the craze of business for quite some time now. Yet, too often, the gathering takes hold of companies so strongly that no thought is given to actually utilizing the data. There’s a reason we had to invent a name for this phenomenon – “dark data.”

Unfortunately, data is often gathered without a good reason. It’s understandable – a lot of internal data is collected by default. The current business climate necessitates using many tools (e.g., CRMs, accounting logs, billing) that automatically create reports and store data.

The collection process is even more expansive for digital businesses and often includes server logs, consumer behavior, and other tangential information.

Building a (Big) Data Pipeline the Right Way

Unless you’re in the data-as-a-service (DaaS) business, simply collecting data doesn’t bring any benefit. With all the hype surrounding data-driven decision-making, I believe many people have lost sight of the forest for the trees. Collecting all forms of data becomes an end in itself.

In fact, such an approach is costing the business money. There’s no free lunch – someone has to set up the collection method, manage the process, and keep tabs on the results. That’s resources and finances wasted. Instead of striving for the quantity of data, we should be looking for ways to lean out the collection process.

Humble Beginnings

Pretty much every business begins its data acquisition journey by collecting marketing, sales, and account data. Certain practices such as Pay-Per-Click (PPC) have proven themselves to be incredibly easy to measure and analyze through the lens of statistics, making data collection a necessity. On the other hand, relevant data is often produced as a byproduct of regular day-to-day activities in sales and account management.

Businesses have already caught on that sharing data between marketing, sales, and account management departments may lead to great things. However, the data pipeline is often clogged, and the relevant information is accessed only indirectly.

Often, the way departments share information lacks immediacy. There is no direct access to data; instead, it’s being shared through in-person meetings or discussions. That’s just not the best way to do it. On the other hand, having consistent access to new data may provide departments with important insights.

Interdepartmental Data

Rather unsurprisingly, interdepartmental data can improve efficiency in numerous ways. For example, sharing Ideal Customer Profile (ICP) lead data between departments will lead to better sales and marketing practices (e.g., a more defined content strategy).

Here’s the burning issue for every business that collects a large amount of data: it’s scattered. Potentially useful information is left all over spreadsheets, CRMs, and other management systems. Therefore, the first step should be not to get more data but to optimize the current processes and prepare them for use.

Combining Data Sources

Luckily, with the advent of Big Data, businesses have been thinking through information management processes in great detail. As a result, data management practices have made great strides in the last few years, making optimization processes a lot simpler.

Data Warehouses

A commonly used principle of data management is building a warehouse for data gathered from numerous sources. But, of course, the process isn’t as simple as integrating a few different databases. Unfortunately, data is often stored in incompatible formats, making standardization necessary.

Usually, data integration into a warehouse follows a 3-step process – extraction, transformation, load (ETL). Other approaches exist, but ETL is likely the most popular option. Extraction, in this case, means taking data that has already been acquired from either internal or external collection processes.

Data transformation is the most complex process of the three. It involves aggregating data from various formats into a common one, identifying missing or repeating fields. In most businesses, doing all of this manually is out of the question; therefore, traditional programming methods (e.g., SQL) are used.

Loading — Moving to the Warehouse

Loading is basically just moving the prepared data to the warehouse in question. While it’s a basic process of moving data from one source to another, it’s important to note that warehouses do not store real-time information. Separating operational databases from warehouses therefore allows the former to serve as a backup and avoids unnecessary corruption.
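The three ETL steps can be sketched in a few lines of Python. The source formats, field names, and the in-memory "warehouse" below are invented purely for illustration:

```python
from datetime import date

# Toy "sources": the same customer stored in incompatible formats.
crm_rows = [
    {"Name": "Acme Corp", "Signup": "2021-03-01", "Revenue": "1200"},
]
billing_rows = [
    ("Acme Corp", date(2021, 3, 1), 1200.0),
]

def extract():
    """Extraction: pull raw records from each internal source."""
    return crm_rows, billing_rows

def transform(crm, billing):
    """Transformation: map every source into one common schema,
    normalizing field names and types along the way."""
    unified = []
    for row in crm:
        unified.append({
            "customer": row["Name"],
            "signup": date.fromisoformat(row["Signup"]),
            "revenue": float(row["Revenue"]),
            "source": "crm",
        })
    for name, signup, revenue in billing:
        unified.append({
            "customer": name,
            "signup": signup,
            "revenue": revenue,
            "source": "billing",
        })
    return unified

def load(records, warehouse):
    """Load: append standardized records to the (non-volatile) warehouse."""
    warehouse.extend(records)
    return warehouse

warehouse = []
load(transform(*extract()), warehouse)
```

In practice the transformation step is where most of the work lives, which is why it is usually expressed in SQL or a dedicated pipeline tool rather than ad-hoc code like this.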

Data warehouses usually have a few critical features:

  • Integrated. Data warehouses are an accumulation of information from heterogeneous sources into one place.
  • Time variant. Data is historical and identified with a particular time period.
  • Non-volatile. Previous data is not removed when newer information is added.
  • Subject oriented. Data is a collection of information based on subjects (personnel, support, sales, revenue, etc.) instead of being directly related to ongoing operations.

External Data to Maximize Potential

Building a data warehouse is not the only way of getting more from the same amount of information. Warehouses help with interdepartmental efficiency; data enrichment processes might help with intradepartmental efficiency.

Data enrichment from external sources

Data enrichment is the process of combining information from external sources with internal ones. Sometimes, enterprise-level businesses might be able to enrich data from purely internal sources if they have enough different departments.

While warehouses work nearly identically for almost any business that deals with large volumes of data, each enrichment process will be different. This is because enrichment processes depend directly on business goals. Otherwise, we would go back to square one, where data is collected without a proper end-goal.

Inbound lead enrichment

A simple approach that might benefit many businesses is inbound lead enrichment. Regardless of the industry, responding quickly to requests for more information increases the efficiency of sales. Enriching leads with professional data (e.g., public company information) provides an opportunity to automatically categorize leads and respond faster to those closer to the Ideal Customer Profile (ICP).
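A minimal sketch of inbound lead enrichment and ICP-based prioritization might look like the following. The ICP attributes, the company database, and the scoring rule are all hypothetical:

```python
# Hypothetical ICP: attributes of an ideal customer.
ICP = {"industry": "software", "min_employees": 50}

def enrich(lead, company_db):
    """Merge public company information (an external source) into the lead."""
    extra = company_db.get(lead["company"], {})
    return {**lead, **extra}

def icp_score(lead):
    """Crude score: +1 per ICP attribute the enriched lead satisfies."""
    score = 0
    if lead.get("industry") == ICP["industry"]:
        score += 1
    if lead.get("employees", 0) >= ICP["min_employees"]:
        score += 1
    return score

# Invented external data source, e.g. a public company registry.
company_db = {
    "Initech": {"industry": "software", "employees": 300},
    "Bob's Bakery": {"industry": "food", "employees": 4},
}

leads = [{"company": "Bob's Bakery"}, {"company": "Initech"}]
enriched = [enrich(lead, company_db) for lead in leads]

# Respond first to the leads closest to the ICP.
queue = sorted(enriched, key=icp_score, reverse=True)
```

A real implementation would pull the external data from an enrichment API rather than a dict, but the flow is the same: merge, score, prioritize.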

Of course, data enrichment need not be limited to sales departments. All kinds of processes can be empowered by external data – from marketing campaigns to legal compliance. However, as always, specifics have to be kept in mind. All data should serve a business purpose.


Before treading into complex data sources, cleaning up internal processes will bring greater results. With dark data comprising over 90% of all data collected by businesses, it’s better to look inwards first and optimize the current processes. Adding more sources while data management practices remain inefficient will only bury potentially useful information.

After creating robust systems for data management, we can move on to gathering complex data. We can then be sure we won’t miss anything important and be able to match more data points for valuable insights.

Image Credit: rfstudio; pexels; thank you!

Julius Cerniauskas

CEO at Oxylabs

Julius Cerniauskas is Lithuania’s technology industry leader & the CEO of Oxylabs, covering topics on web scraping, big data, machine learning & tech trends.



Experiments in Fast Image Recognition on Mobile Devices – ReadWrite




Our journey in experimenting with machine vision and image recognition accelerated when we were developing an application, BooksPlus, to change a reader’s experience. BooksPlus uses image recognition to bring printed pages to life. A user can get immersed in rich and interactive content by scanning images in the book using the BooksPlus app. 

For example, you can scan an article about a poet and instantly listen to the poet’s audio. Similarly, you can scan images of historical artwork and watch a documentary clip.

As we started the development, we used commercially available SDKs that worked very well when we tried to recognize images locally. Still, these would fail as our library of images went over a few hundred images. A few services performed cloud-based recognition, but their pricing structure didn’t match our needs. 

Hence, we decided to experiment to develop our own image recognition solution.

What were the Objectives of our Experiments?

We focused on building a solution that would scale to the thousands of images that we needed to recognize. Our aim was to achieve high performance while being flexible to do on-device and in-cloud image matching. 

As we scaled the BooksPlus app, the target was to build a cost-effective outcome. We ensured that our own effort was as accurate as the SDKs (in terms of false positives and false negative matches). Our solutions needed to integrate with native iOS and Android projects.

Choosing an Image Recognition Toolkit

The first step of our journey was to zero down on an image recognition toolkit. We decided to use OpenCV based on the following factors:

  • A rich collection of image-related algorithms: OpenCV has a collection of more than 2500 optimized algorithms, which has many contributions from academia and the industry, making it the most significant open-source machine vision library.
  • Popularity: OpenCV has an estimated 18 million downloads and a community of 47 thousand users, making abundant technical support available.
  • BSD-licensed product: As OpenCV is BSD-licensed, we can easily modify and redistribute it according to our needs. As we wanted to white-label this technology, OpenCV would benefit us.
  • C interface: OpenCV has C interfaces and support, which was very important for us as both native iOS and Android support C. This would allow us to have a single codebase for both platforms.

The Challenges in Our Journey

We faced numerous challenges while developing an efficient solution for our use case. But first, let’s understand how image recognition works.

What is Feature Detection and Matching in Image Recognition?

Feature detection and matching is an essential component of every computer vision application. It underpins tasks such as object detection, image retrieval, and robot navigation.

Consider two images of a single object clicked at slightly different angles. How would you make your mobile recognize that both the pictures contain the same object? Feature Detection and Matching comes into play here.

A feature is a piece of information that indicates whether an image contains a specific pattern; points and edges can be used as features. Feature points must be selected in a way that they remain invariant under changes in illumination, translation, scaling, and in-plane rotation. Using invariant feature points is critical to successfully recognizing similar images under different positions.

The First Challenge: Slow Performance

When we first started experimenting with image recognition using OpenCV, we used the recommended ORB feature descriptors and FLANN feature matching with 2 nearest neighbours. This gave us accurate results, but it was extremely slow. 

The on-device recognition worked well for a few hundred images; the commercial SDK would crash after 150 images, but we were able to increase that to around 350. However, that was insufficient for a large-scale application.

To give an idea of the speed of this mechanism, consider a database of 300 images. It would take up to 2 seconds to match an image. With this speed, a database with thousands of images would take a few minutes to match an image. For the best UX, the matching must be real-time, in a blink of an eye. 

The number of matches made at different points of the pipeline needed to be minimized to improve the performance. Thus, we had two choices:

  1. Reduce the number of nearest neighbors, but we were already at 2, the lowest possible number.
  2. Reduce the number of features we detected in each image, but reducing the count would hinder the accuracy. 

We settled upon using 200 features per image, but the time consumption was still not satisfactory. 

The Second Challenge: Low Accuracy

Another challenge was reduced accuracy while matching images in books that contained text. These books would sometimes have words around the photos, which would add many highly clustered feature points to the words. This increased the noise and reduced the accuracy.

In general, the book’s printing caused more interference than anything else: the text on a page creates many useless features, highly clustered on the sharp edges of the letters causing the ORB algorithm to ignore the basic image features.

The Third Challenge: Native SDK

After the performance and precision challenges were resolved, the ultimate challenge was to wrap the solution in a library that supports multi-threading and is compatible with Android and iOS mobile devices.

Our Experiments That Led to the Solution:

Experiment 1: Solving the Performance Problem

The objective of the first experiment was to improve performance. Our system could be presented with any random image, out of billions of possibilities, and had to determine whether that image matched our database. Therefore, instead of doing a direct match, we devised a two-part approach: simple matching and in-depth matching.

Part 1: Simple Matching: 

To begin, the system eliminates obvious non-matches: images that can easily be identified as not matching, out of our database’s thousands or even tens of thousands of images. This is accomplished through a very coarse scan that considers only 20 features, using an on-device database to determine whether the scanned image belongs to our interesting set.

Part 2: In-Depth Matching 

After Part 1, we were left with very few images from the large dataset with similar features – the interesting set. An in-depth match was performed only on these interesting images, matching all 200 features to find the matching image. As a result, we reduced the number of feature-matching loops performed on each image.

Every feature was matched against every feature of the training image. The coarse scan brought the matching loops down from 40,000 (200×200) to 400 (20×20) per image, yielding a shortlist of the best possible matching images on which to compare the full 200 features.
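The two-part approach can be sketched in plain Python. The random 32-bit fingerprints below stand in for real ORB descriptors, and the thresholds are illustrative, not the values used in BooksPlus:

```python
import random

random.seed(0)
FEATURES, COARSE = 200, 20  # full descriptor count vs. coarse fingerprint

def descriptor(bits=32):
    """Stand-in for a binary ORB descriptor."""
    return random.getrandbits(bits)

def hamming(a, b):
    """Hamming distance between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_score(query, train):
    """Count query features whose best train match is 'close enough'."""
    hits = 0
    for q in query:
        if min(hamming(q, t) for t in train) <= 4:  # illustrative threshold
            hits += 1
    return hits

# Database of images, 200 features each; the first 20 serve as the
# coarse fingerprint used in Part 1.
database = [[descriptor() for _ in range(FEATURES)] for _ in range(50)]
query = list(database[7])  # scan an image we know is in the database

# Part 1: coarse scan, 20x20 = 400 loops per image instead of 40,000.
candidates = [
    img for img in database
    if match_score(query[:COARSE], img[:COARSE]) >= COARSE // 2
]

# Part 2: in-depth match over all 200 features, only on the survivors.
best = max(candidates, key=lambda img: match_score(query, img))
```

Because random non-matching fingerprints almost never clear the coarse threshold, nearly all of the database is rejected after 400 cheap comparisons, which is the source of the speedup described above.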

We were more than satisfied with the result. The dataset of 300 images that would previously take 2 seconds to match an image would now take only 200 milliseconds. This improved mechanism was 10x faster than the original, with a delay barely noticeable to the human eye.

Experiment 2: Solving the Scale Problem

To scale up the system, part 1 of the matching was done on the device and part 2 could be done in the cloud – this way, only images that were a potential match were sent to the cloud. We would send the 20 feature fingerprint match information to the cloud, along with the additional detected image features. With a large database of interesting images, the cloud could scale.

This method allowed us to keep a large database (with fewer features per image) on-device in order to eliminate obvious non-matches. The memory requirements were reduced, and we eliminated crashes caused by system resource constraints, which had been a problem with the commercial SDK. Because the real matching was done in the cloud, we could scale while cutting cloud computing costs by not spending cloud CPU cycles on obvious non-matches.

Experiment 3: Improving the Accuracy

Now that we had better performance, the matching process’s practical accuracy needed enhancement. As mentioned earlier, when scanning a picture in the real world, the amount of noise was enormous.

Our first approach was to use the CANNY edge detection algorithm to find the square or rectangular edges of the image and clip out the rest of the data, but the results were not reliable. Two issues remained. The first was that images would sometimes contain captions that were part of the overall image rectangle. The second was that images would sometimes be aesthetically placed in different shapes, like circles or ovals. We needed to come up with a simpler solution.

Finally, we analyzed the images in 16 shades of grayscale and looked for areas skewed towards only 2 to 3 shades of grey. This method accurately found areas of text on the outer regions of an image. After finding these portions, blurring them kept them from interfering with the recognition mechanism.
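A toy version of this shade analysis might look like the following. The block size, shade threshold, and nested-list "images" are invented for illustration; real text detection works on full-resolution bitmaps:

```python
def shade(pixel):
    """Quantize an 8-bit grey value (0-255) into one of 16 shades."""
    return pixel // 16

def is_text_block(block, max_shades=3):
    """A block dominated by only 2-3 shades is likely printed text."""
    shades = {shade(p) for row in block for p in row}
    return len(shades) <= max_shades

def blur(block):
    """Flatten the block to its mean so it no longer yields features."""
    pixels = [p for row in block for p in row]
    mean = sum(pixels) // len(pixels)
    return [[mean] * len(row) for row in block]

# Toy blocks: crisp black-on-white text uses only 2 shades,
# while a photo-like region spans many shades.
text_block = [[0, 255, 0, 255], [255, 0, 255, 0]]
photo_block = [[10, 60, 110, 160], [200, 90, 30, 240]]

blocks = [text_block, photo_block]
cleaned = [blur(b) if is_text_block(b) else b for b in blocks]
```

The key idea survives the simplification: text regions collapse into a handful of grey levels, so counting shades per region separates them from photographic content.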

Experiment 4: Implementing a Native SDK for Mobile

We swiftly managed to enhance the feature detection and matching system’s accuracy and efficiency in recognizing images. The final step was implementing an SDK that works across both iOS and Android devices as seamlessly as a native SDK. To our advantage, both Android and iOS support the use of C libraries in their native SDKs. Therefore, the image recognition library was written in C, and two SDKs were produced from the same codebase.

Each mobile device has different resources available. Higher-end devices have multiple cores to perform multiple tasks simultaneously. We created a multi-threaded library with a configurable number of threads, which would automatically configure itself at runtime to the optimum number of threads for the device.
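The original library was written in C, but the runtime-configurable thread pool it describes can be sketched in Python as an analogy; the function names here are invented:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def optimal_threads():
    """Pick a default thread count from the device's core count at runtime."""
    return max(1, os.cpu_count() or 1)

def match_all(images, match_fn, threads=None):
    """Run the matching work across a configurable number of threads;
    fall back to the auto-detected optimum when none is given."""
    with ThreadPoolExecutor(max_workers=threads or optimal_threads()) as pool:
        return list(pool.map(match_fn, images))

# Toy workload standing in for per-image feature matching.
results = match_all(range(8), lambda i: i * i, threads=2)
```

The design choice mirrors the article: callers may pin the thread count explicitly, while the default adapts to however many cores the device exposes.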


To summarize, we developed a large-scale image recognition application (used in multiple fields including Augmented Reality) by improving the accuracy and the efficiency of the machine vision: feature detection and matching. The already existing solutions were slow and our use case produced noise that drastically reduced accuracy. We desired accurate match results within a blink of an eye.

Thus, we ran a few tests to improve the mechanism’s performance and accuracy. This reduced the number of feature matching loops by 90%, resulting in a 10x faster match. Once we had the performance that we desired, we needed to improve the accuracy by reducing the noise around the text in the images. We were able to accomplish this by blurring out the text after analyzing the image in 16 different shades of grayscale. Finally, everything was compiled into the C language library that can be used with iOS and Android.

Anand Shah

Ignite Solutions

Founder and CEO of Ignite Solutions, Anand Shah is a versatile technologist and entrepreneur. His passion is backed by 30 years of experience in the field of technology with a focus on startups, product management, ideation, lean methods, technology leadership, customer relationships, and go-to-market. He serves as the CTO of several client companies.



DDoS Can Cripple a Blockchain, What Does This Mean to the Cryptocurrency Ecosystem – ReadWrite




More than a decade old, blockchain has become the de facto foundation for mining, security, and the creation of cryptocurrency. It is dependable, trusted, and widely used for multiple forms of digital currency around the world.

DDoS Can Cripple a Blockchain

Due to its digital nature, blockchain is susceptible to attack and exploitation. One of the most dangerous threats to blockchain is the distributed denial of service (DDoS) attack.

However, even when vulnerabilities exist, networks and users can find ways to prevent harm to blockchain transactions and information.

As we discuss here — you will want to protect yourself and your organization proactively.

What is Blockchain and How Does This Technology Work?

While complicated, blockchain generally boils down to a specific type of database: a way to store information in blocks chained together. The blocks are ordered chronologically, and the chain grows as fresh data comes into the blockchain.

The blockchain type of data transaction has no central authority and provides group access through decentralization.

Transactions on decentralized blockchains are irreversible, so once the data is within the database, it cannot be changed.

Blockchain transactions are trustworthy, secure from outside sources, and move quickly throughout various networks worldwide.

Unlike other forms of currency, there is no physical representation of a blockchain, as it is only data. However, it can also store the history of cryptocurrency transactions, legally binding contracts, and inventories of various products.

How Blockchain Is Built in Bitcoin Mining

Cryptocurrency mining is a process in which computers solve intricate mathematical problems.

The mining of bitcoin and other cryptocurrencies occurs through these processes: transactions are combined with similar ones and then transmitted to all nodes, which updates the associated ledgers.

New currency is possible through rewards given once the computers solve the mathematical computations. The mining involved with bitcoin creates blocks of data with these transactions, which eventually create blockchains. These are large and long sequences of mined transactional data.

The nodes will confirm trusted data and verify the information within the blockchain. Through checks and balances with these processes, the blockchain can consistently maintain integrity. The inherent integrity in the system ensures trust in the bitcoin mined through the blockchain.
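The integrity mechanism described above can be illustrated with a toy hash chain in Python. The function names are invented, and real clients are far more elaborate, but the core property survives: each block's hash covers the previous block's hash, so any tampering breaks the chain:

```python
import hashlib

def block_hash(index, data, prev_hash):
    """Each block's hash covers its contents and the previous hash."""
    payload = f"{index}|{data}|{prev_hash}".encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(entries):
    """Chain blocks chronologically, each pointing at its predecessor."""
    chain, prev = [], "0" * 64  # genesis block points at all zeros
    for i, data in enumerate(entries):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Nodes re-hash every block; any tampering is detected."""
    prev = "0" * 64
    for block in chain:
        if (block["prev"] != prev
                or block["hash"] != block_hash(block["index"],
                                               block["data"], prev)):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob 1 BTC", "bob->carol 2 BTC"])
chain[0]["data"] = "alice->mallory 99 BTC"  # tamper with history
tampered = not verify(chain)
```

Rewriting even one old block invalidates every hash after it, which is why an attacker needs majority control of the network, as the 51 percent attack discussed below, to make a rewritten chain stick.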

Why Is Blockchain, the Base of Cryptocurrency, Considered “Unhackable”?

Blockchain utilizes multiple sources of defense against hackers to prevent attacks and to assist in reducing the costs of damage once a cyber threat like a DDoS attack occurs.

Blockchain was once considered unhackable because the data is immutable, or irreversible, once entered. In addition, it is decentralized, with no individual authority such as a bank or government. Additional protocols also identify and report potential threats to the blockchain in use.

The decentralization specifically reduces various risks to the data and limits fees with transactions and processing of the data. Blockchain accomplishes this by spreading processing power over multiple computers in a network.

How a DDoS Attack Can Cripple Blockchain

Related to the DDoS attacks on EXMO, blockchain has some vulnerabilities regarding DDoS attacks. These include the following:

  1. Blockchain nodes: Blockchains exist on nodes that update with the latest data and are all connected.
  2. Smart contracts: Programs stored on the blockchain that run when predetermined conditions are met.
  3. Consensus mechanisms: There are three main consensus mechanisms – Proof of Work (PoW), Proof of Stake (PoS), and Delegated Proof of Stake (DPoS). These are the rules by which nodes agree on which data is valid within the blockchain.
  4. Wallets: A digital wallet provides access to, storage, and management of cryptocurrency such as bitcoin.

The 51 percent attack happens when a hacker or other malicious user controls more than 50 percent of the network’s processing power, or hash rate. With that majority, the attacker can take over chains of data so that they never reach the intended party.

Another side effect is that the hacker can copy data and add it to the chain, effectively deleting the previous information as though the network never saw it.

The perpetrator will use a DDoS to exploit some of these vulnerabilities and steal access to blockchain data and transactions such as bitcoin. For example, 51 percent attacks (per seba.swiss) led to losses of over $1.1 million with Ethereum Classic in 2019, nearly $2 million with Verge in 2018, and just over $1 million in a second Verge attack the same year.

What Is a DDoS Attack, and How Does It Make Blockchain Prone to Hacking?

In a Distributed Denial of Service (DDoS) attack, someone with malicious intent floods a server, a single network, or multiple networks with requests or excess traffic. Usually, the individual or group responsible wants either to slow the system down or to cause a complete collapse.

Once a DDoS attack overwhelms one computer, the effects can cascade to others in the same network, leading to widespread failure.

Blockchain's vulnerabilities in its nodes, contracts, and wallets can lead to overutilization of processing power within the server or network. That overutilization then causes a loss of connectivity with cryptocurrency exchanges or other applications connected at the time.

The perpetrator of these attacks can start by tracking IP addresses with specific locations around the world.

The DDoS attacks on EXMO took the company's British servers offline, bringing down the website and leaving the servers unable to run in the aftermath of the attack. Hot wallets were also compromised during the event, and the perpetrators withdrew five percent of all assets involved.

EXMO suspended all withdrawals at the time and pledged to cover every loss. The incident also prompted new infrastructure development, including a separate server for hot wallets.

What Does This Mean to the Crypto-Market?

The crypto market fluctuates often. Its value shifts with word of mouth, with information that helps or harms particular cryptocurrencies, and with the financial losses caused by DDoS attacks. Because malicious actors can take websites, servers, or networks offline for indeterminate amounts of time, the crypto market can see dips in investment and in the reliability of financial transactions.

Often, after a DDoS attack, the blockchain development may change focus or utilize new techniques that decrease the possibility of vulnerabilities.

After a DDoS Attack, What Are the Implications for the Crypto Market and Bitcoin Specifically?

Cryptocurrency markets grew from $19 billion to $602 billion over the course of 2017. At that trading volume, even the negative effects of DDoS attacks are normally absorbed within the same day the damage occurs. However, malicious users can still move the market through Twitter feeds, news surfaced in Google searches, and network status pages.

How Does a DDoS Attack Affect the Bitcoin Ecosystem?

Bitcoin trading fluctuates with the downtime of the servers or websites associated with the cryptocurrency. Offline websites restrict access to trades, the ability to buy or sell, and access to Bitcoin itself.

Additionally, if someone influential comments on Bitcoin through social media, the market can trend down or up depending on whether the review is negative or positive, generally leading to more or fewer purchases of the cryptocurrency.

Once the market is affected by these trends, prices change. For example, mass selling may occur after a DDoS attack if numerous users are affected by a loss of financial transactions. This can happen even if the company behind the affected exchange reimburses users for those losses.

Can a Cyber-Attack Change the Market from Bull to Bear?

The general statistical trend of the crypto-market appears little affected by the negative effects of DDoS attacks.

Websites are normally back online within the same day, and trading, buying, and selling cryptocurrency are not usually severely impacted by most cyber-attacks. Bitcoin, in particular, shows few patterns that explain its bull and bear price swings.

However, repeated cyberattacks targeting one website, server, or network can lead to sustained losses for the company. The loss of faith caused by the resulting downtime can feed a bear market in which losses persist for a period of time.

What Cybersecurity Measures Should Be Taken to Prevent a DDoS Attack?

To prevent DDoS attacks like those that hit EXMO and other companies, you can put numerous cybersecurity measures in place.

Prevention is key. There are several ways to prevent DDoS and other cybersecurity attacks.

  • Develop a Denial of Service plan by assessing security risks and what to do in case an attack ever occurs.
  • Enhance network infrastructure security for multi-level protection protocols.
  • Minimize user errors and security vulnerabilities.
  • Develop a strong network architecture by focusing on redundant resources within the network and servers.
  • Utilize the cloud to spread out the attack and use multiple environments to prevent damage within the system.
  • Recognize common warning signs of DDoS attacks, such as increased traffic, intermittent connectivity, and a lack of standard performance.
  • Consider investing in DDoS protection as a service, which can provide flexibility, third-party resources, and cloud or dedicated hosting across multiple types of servers at once.
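
As a minimal sketch of the warning-signs bullet above, a rolling-baseline monitor can flag sudden traffic spikes. The window size and spike threshold here are illustrative assumptions, not recommended production values:

```python
from collections import deque

class TrafficMonitor:
    """Flag request-rate spikes relative to a rolling baseline, one of the
    early DDoS warning signs: a sudden surge well above recent traffic."""
    def __init__(self, window=10, spike_factor=3.0):
        self.window = deque(maxlen=window)   # recent requests-per-second samples
        self.spike_factor = spike_factor     # how far above baseline counts as a spike

    def observe(self, requests_per_second):
        """Record a sample; return True if it spikes above the rolling average."""
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(requests_per_second)
        if baseline is None:
            return False                     # no baseline yet on the first sample
        return requests_per_second > self.spike_factor * baseline

monitor = TrafficMonitor()
normal = [100, 110, 95, 105, 98]
assert not any(monitor.observe(r) for r in normal)
assert monitor.observe(1500)   # a 15x jump over baseline triggers an alert
```

Real deployments layer this kind of heuristic with connection-level and application-level checks, but even a crude baseline comparison catches the "increased traffic" warning sign early.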

It is imperative to identify and then respond to attacks in real-time.

By using checkboxes, CAPTCHAs, and other challenge methods on the website, programs and users can determine whether activity is genuine or a potential threat. In addition, shortening response times through automation, recognizing attack patterns, and implementing defense systems all provide layers of protection.

Automation of attack detection can reduce DDoS response time against the attack.

The automation method provides near-instant detection for incoming DDoS attacks.

When traffic spikes to untenable levels, automation can redirect traffic through an automated defense system. This system is usually adaptive and can employ various methods if the DDoS event is different from the previous attacks.

Automation can identify patterns in traffic by sifting through large amounts of data quickly, providing real-time responses during the attack. The automated defense system can also consult IP blocklists and other tools to protect certain zones of information.
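
One simple form of this automated defense is a per-IP rate limiter that adds abusive addresses to a blocklist. The thresholds and class design here are hypothetical, for illustration only:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Sliding-window limiter: IPs exceeding `max_requests` per `period`
    seconds are added to a blocklist automatically."""
    def __init__(self, max_requests=100, period=1.0):
        self.max_requests = max_requests
        self.period = period
        self.hits = defaultdict(list)    # ip -> recent request timestamps
        self.blocklist = set()

    def allow(self, ip, now=None):
        """Return True if the request should be served, False if dropped."""
        if ip in self.blocklist:
            return False
        now = time.monotonic() if now is None else now
        window = [t for t in self.hits[ip] if now - t < self.period]
        window.append(now)
        self.hits[ip] = window
        if len(window) > self.max_requests:
            self.blocklist.add(ip)       # automated defense: drop future traffic
            return False
        return True

limiter = RateLimiter(max_requests=3, period=1.0)
assert all(limiter.allow("10.0.0.1", now=0.1 * i) for i in range(3))
assert not limiter.allow("10.0.0.1", now=0.4)   # 4th request in the window
assert "10.0.0.1" in limiter.blocklist
assert limiter.allow("10.0.0.2", now=0.5)       # other clients are unaffected
```

A production system would expire blocklist entries and distribute state across servers, but the core idea is the same: measure per-source rates and cut off the outliers without operator intervention.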

Automated defense systems provide adaptable protection against the ever-evolving hacker intent on stealing data. With real-time updates and access to lists the company or user may not otherwise have, network administrators can implement strategies to prevent or mitigate the damage at the time of the attack.

New Trends in Protecting Against DDoS Attacks

A new cybersecurity trend involves awarding cryptocurrency to users who spot irregular activity and report the issue. Previous and continuing trends involve tracking deviations in traffic. Some companies use software to analyze answers to queries, determine whether transactions are legitimate, and evaluate whether processes line up with genuine activity. This can single out bots or malicious traffic.

Pattern recognition is important when determining whether a DDoS attack is underway.

Companies can use machine learning technology to detect irregular patterns. For example, a query can help to determine which IP addresses, timeframes, or accounts are affecting the network during a DDoS attack.
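
The kind of query described above, grouping traffic by source address to see which IPs dominate during an attack, might look like the following. The log format is an assumption (a simplified "ip timestamp path" line), not any particular server's format:

```python
from collections import Counter

def top_talkers(log_lines, n=3):
    """Aggregate an access log by source IP and return the n busiest
    addresses, the first step in spotting DDoS participants."""
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return counts.most_common(n)

log = [
    "203.0.113.7 12:00:01 /api/trade",
    "203.0.113.7 12:00:01 /api/trade",
    "198.51.100.2 12:00:02 /login",
    "203.0.113.7 12:00:02 /api/trade",
]
assert top_talkers(log, n=1) == [("203.0.113.7", 3)]
```

A machine learning pipeline would extend this by adding features such as request timing and account behavior, but a frequency count over the log is often enough to surface the addresses worth blocking first.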

Doing this early, and doing it well, helps guard against attacks.

Another trend in guarding against DDoS attacks is identifying IP addresses commonly associated with such attacks and blocking them.

Some companies use forensic tools after a data breach or DDoS attack to determine how the attack occurred and how to respond to a similar one in the future. This may involve programs and encrypted logs recorded for later review.


It is vital to stay alert to potential threats. By always being prepared for potential disasters, you should be able to prevent catastrophe.

Having a plan in place when the attack happens can limit response time to prevent the website or network from going offline.

To accomplish these goals, implement stronger cybersecurity measures and invest in resources that recognize various DDoS patterns and alert users immediately so they can take direct action. These proactive steps can help protect blockchain data and keep cryptocurrency from falling into malicious hands.

Ben Hartwig

Ben is a Web Operations Director at InfoTracer who takes a wide view from the whole system. He authors guides on entire security posture, both physical and cyber. Enjoys sharing the best practices and does it the right way!
