Cloud computing has revolutionized the way businesses operate and manage their data and applications.
The three major players in the cloud computing market are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Each of these cloud platforms offers a range of services and features, making it important to consider your specific needs and requirements before choosing one.
Amazon Web Services (AWS) is the market leader in cloud computing, offering a comprehensive suite of services that spans computing, storage, databases, security, analytics, and artificial intelligence. AWS also provides a broad set of tools for developers and IT professionals, making it easy to build, deploy, and manage applications. With its global infrastructure, AWS offers low latency and high performance, making it a popular choice for businesses of all sizes.
Microsoft Azure is a cloud computing platform aimed at businesses and developers, with services spanning virtual machines, storage, databases, and security. It also provides developer tooling such as the Azure DevOps suite, which helps streamline development workflows. Azure integrates well with other Microsoft products, such as Office 365 and Dynamics 365, making it a good choice for businesses that already use those products.
Google Cloud Platform (GCP) likewise serves businesses and developers, with services spanning virtual machines, storage, databases, security, and artificial intelligence. GCP provides developer tooling such as the Google Cloud SDK, which makes it easy to build and deploy applications. GCP is also known for its high-performance infrastructure and innovative technologies, making it a popular choice for businesses looking for cutting-edge solutions.
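To make the compute and serverless categories in the comparison below concrete, here is a minimal AWS Lambda function handler in Java, a sketch rather than production code. The class name and greeting logic are illustrative, but RequestHandler is the standard interface from the aws-lambda-java-core library; Azure Functions and GCP Cloud Functions expose similar handler models.

```java
// Minimal AWS Lambda handler in Java (aws-lambda-java-core).
// Illustrative sketch only: the greeting logic is hypothetical,
// but RequestHandler is the standard Lambda handler interface.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class HelloHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        // Lambda passes the deserialized event as 'input' and runtime
        // metadata (function name, remaining time, logger) as 'context'.
        String name = input.getOrDefault("name", "world");
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name;
    }
}
```

Packaged as a jar, this would typically be wired up by pointing the Lambda configuration at HelloHandler::handleRequest; you pay only while the function runs, which is the pay-per-use model all three providers share.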
| # | Feature | AWS (Amazon Web Services) | Azure (Microsoft Azure) | GCP (Google Cloud Platform) |
|---|---|---|---|---|
| 1 | Market Share | Has the largest market share among the three | Has a strong market presence and growing at a fast pace | Has a relatively smaller market share compared to AWS and Azure |
| 2 | Geographical Presence | Has a large global footprint with a presence in many regions | Has a strong presence in Europe, the US, and Asia | Has a growing presence, with a focus on the US and Europe |
| 3 | Cost | Known for its flexible pricing and cost optimization options | Offers cost-effective solutions and discounts for long-term commitments | Offers competitive pricing, but requires upfront investments for some services |
| 4 | Hybrid Cloud Capabilities | Offers hybrid cloud solutions through its Outposts product | Provides hybrid cloud solutions through Azure Stack | Has limited hybrid cloud options, but is developing Anthos as a hybrid cloud solution |
| 5 | Compute | Offers a wide range of compute options, including EC2, Elastic Beanstalk, and Lambda | Provides a variety of compute options, including VMs, App Service, and Functions | Offers compute services, including Compute Engine, Kubernetes Engine, and Cloud Functions |
| 6 | Storage | Offers a range of storage solutions, including S3, EBS, and Glacier | Provides storage options, including Blob, Disk, and File Storage | Offers storage options, including Cloud Storage, Persistent Disk, and Cloud SQL |
| 7 | Database | Offers a range of databases, including RDS, DynamoDB, and Redshift | Provides database options, including SQL Database, Cosmos DB, and Azure Database for MySQL | Offers database services, including Cloud SQL, Firestore, and Bigtable |
| 8 | Machine Learning | Offers a range of machine learning services, including SageMaker, Rekognition, and DeepRacer | Provides machine learning services, including Azure ML, Cognitive Services, and Databricks | Offers machine learning services, including AutoML, TensorFlow, and AI Platform |
| 9 | Analytics | Offers analytics services, including QuickSight, Kinesis, and CloudWatch | Provides analytics services, including Power BI, Stream Analytics, and HDInsight | Offers analytics services, including BigQuery, Dataflow, and Cloud Data Loss Prevention |
| 10 | Networking | Offers a range of networking services, including VPC, Direct Connect, and Route 53 | Provides networking services, including Virtual Network, ExpressRoute, and Load Balancer | Offers networking services, including Virtual Private Cloud, Cloud Interconnect, and Cloud DNS |
| 11 | Security | Offers a range of security services, including IAM, KMS, and GuardDuty | Provides security services, including Azure AD, Key Vault, and Security Center | Offers security services, including Cloud Identity, Key Management Service, and Security Command Center |
| 12 | DevOps | Offers DevOps tools, including CodeCommit, CodeBuild, and CodeDeploy | Provides DevOps tools, including Azure DevOps, Container Registry, and Kubernetes Service | Offers DevOps tools, including Cloud Build, Cloud Source Repositories, and Stackdriver |
| 13 | Containers | Offers container services, including ECS, Fargate, and Elastic Kubernetes Service | Provides container services, including AKS, Container Instances, and Service Fabric | Offers container services, including GKE, Cloud Run, and Cloud Functions |
| 14 | Serverless Computing | Offers serverless computing options, including Lambda, API Gateway, and Step Functions | Provides serverless computing options, including Functions, Event Grid, and Logic Apps | Offers serverless computing options, including Cloud Functions, Cloud Run, and Cloud Pub/Sub |
| 15 | IoT | Offers IoT services, including IoT Core, Greengrass, and IoT Analytics | Provides IoT services, including IoT Hub, IoT Edge, and IoT Central | Offers IoT services, including Cloud IoT Core and Cloud IoT Edge |
| 16 | Blockchain | Offers blockchain services, including Managed Blockchain and Quantum Ledger Database | Provides blockchain services, including Azure Blockchain Service and Ethereum on Azure | Offers blockchain services, including Blockchain ETL and Chainlink on Google Cloud |
| 17 | VR/AR | Offers VR/AR services, including Amazon Sumerian | Provides VR/AR services, including Spatial Anchors and Remote Rendering | Offers VR/AR services, including Poly and Tilt Brush |
| 18 | Multimedia | Offers multimedia services, including Transcribe, Translate, and Polly | Provides multimedia services, including Speech Services and Cognitive Services | Offers multimedia services, including Speech-to-Text and Text-to-Speech |
| 19 | Big Data | Offers big data services, including EMR, Kinesis, and Glue | Provides big data services, including HDInsight, Data Lake Storage, and Databricks | Offers big data services, including BigQuery, Dataproc, and Cloud Dataflow |
| 20 | Management & Governance | Offers management and governance tools, including CloudFormation, CloudTrail, and CloudWatch | Provides management and governance tools, including Azure Policy, Azure Monitor, and Azure Cost Management | Offers management and governance tools, including Stackdriver, Cloud Deployment Manager, and Cloud Billing |
| 21 | Integration & APIs | Offers integration and API services, including API Gateway, AppSync, and EventBridge | Provides integration and API services, including Azure API Management, Logic Apps, and Event Grid | Offers integration and API services, including Cloud Endpoints, Apigee, and Cloud Functions |
| 22 | Business Applications | Offers a range of business applications, including WorkDocs, WorkMail, and Connect | Provides business applications, including Power Apps, Power Automate, and Power BI | Offers business applications, including Google Workspace (formerly G Suite) and Google Data Studio |
| 23 | Artificial Intelligence | Offers AI services, including SageMaker, Rekognition, and Comprehend | Provides AI services, including Cognitive Services, Bot Service, and Machine Learning | Offers AI services, including Dialogflow, AutoML, and Vision API |
| 24 | Security & Compliance | Offers security and compliance tools, including IAM, KMS, and GuardDuty | Provides security and compliance tools, including Azure Active Directory, Azure Security Center, and Azure Information Protection | Offers security and compliance tools, including Identity and Access Management, Cloud Key Management Service, and Security Command Center |
| 25 | Compliance Standards | Meets compliance standards such as SOC 1, SOC 2, SOC 3, PCI DSS, and HIPAA | Meets compliance standards such as SOC 1, SOC 2, SOC 3, PCI DSS, and HIPAA | Meets compliance standards such as SOC 1, SOC 2, SOC 3, PCI DSS, and HIPAA |
| 26 | Pricing Model | Pricing is based on a pay-as-you-go approach and can vary depending on the services used | Pricing is also pay-as-you-go, with the option to purchase reserved instances | Pricing is based on a pay-per-use approach, with custom and flexible pricing options |
| 27 | Support & Services | Offers a range of support plans, including developer, business, and enterprise support | Provides a range of support plans, including developer, standard, and premier support | Offers a range of support plans, including premium, standard, and basic support |
| 28 | Global Presence | Has a global presence, with a large number of data centers worldwide | Also has a global presence, with data centers in several regions worldwide | Has a growing global presence, with data centers located in multiple regions around the world |
| 29 | Documentation & Community | Offers comprehensive documentation and has a large community of users | Provides detailed documentation and has a growing community of users | Offers extensive documentation and has a growing community of users and supporters |
| 30 | Hybrid & Multi-cloud | Supports hybrid and multi-cloud solutions, including Outposts and Snowball Edge | Provides hybrid and multi-cloud solutions, including Azure Stack and Azure Arc | Supports hybrid and multi-cloud solutions, including Anthos and Cloud Services Platform |
In conclusion, each of these cloud platforms has its strengths and weaknesses, and the best choice for your business will depend on your specific needs and requirements. AWS is a good choice for businesses that are looking for a comprehensive suite of cloud computing services. Azure is a good choice for businesses that are already using other Microsoft products. GCP is a good choice for businesses looking for cutting-edge solutions and innovative technologies. It is important to consider your specific needs and requirements before choosing a cloud platform and to consult with a cloud computing expert if you need additional guidance.
Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in ways similar to human intelligence. Many people hold misconceptions about the emerging AI industry, so this article collects some basic knowledge about artificial intelligence to set the record straight.
Contents
• What is artificial intelligence
• Seven misunderstandings about artificial intelligence AI
• Artificial intelligence in the future
If you are a business executive (rather than a data scientist or machine learning expert), you may know artificial intelligence mainly from mainstream media reports. You may have read related articles in “The Economist” and “Forbes”, stories about Tesla’s autonomous driving, Stephen Hawking’s warnings about the threat AI poses to humans, or even caricatures comparing artificial intelligence with human intelligence. If you are an executive who cares about the development of your company, these media reports may raise two nagging questions. First, is the commercial potential of AI real? Second, how can AI be applied to my products? The answer to the first question is yes, AI has real commercial potential: companies can use AI to automate operating processes that previously required human intelligence, scaling the workload a human-intensive company can handle by a factor of 100 while cutting unit costs by as much as 90%. Answering the second question takes a little longer. First, we must clear away the AI myths promoted by mainstream media; only by eliminating these misunderstandings can you build a framework for applying AI to your business.
Seven misunderstandings about artificial intelligence AI
Myth 1: AI is magic
Many mainstream media outlets describe AI as magic, as if all we can do is applaud the master magicians at big companies such as Google, Facebook, Apple, Amazon, and Microsoft. This framing is unhelpful. If we want companies to adopt AI, entrepreneurs need to understand it. AI is not magic; AI is data, mathematics, models, and iteration. For AI to be accepted by enterprises, we need to be more transparent. Here are explanations of three key concepts behind AI:
Training data (TD): Training data is the initial data set for machine learning. It includes inputs paired with the expected outputs, so a machine learning model can find the patterns that map one to the other. For example, the input can be a customer support ticket containing the email thread between the customer and a customer support representative (CSR), and the output can be a category label from 1 to 5 based on a company-specific category definition.
Machine Learning (ML): Machine learning is software that learns patterns from training data and applies those patterns to new input data. For example, when a new customer support ticket arrives with an email thread between the customer and the CSR, the machine learning model can predict its category and report its confidence in that prediction. The defining feature of machine learning is that it learns rules from data rather than applying fixed, hand-coded rules, so it can adjust those rules as it digests new data.
Human-in-the-Loop (HITL): Human-in-the-loop is the third core element of AI. We cannot expect machine learning models to be absolutely reliable; a good model may be only 70% accurate. Therefore, when the model’s confidence is low, a human-in-the-loop workflow needs to route the case to a person.
So, don’t be fooled by the myth that AI is magic. The basic formula for understanding AI is: AI = TD + ML + HITL.
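Read literally, that formula is just a routing rule: accept the model’s answer when confidence is high, fall back to a person when it is low, and recycle the human answer as new training data. Here is a minimal sketch in Java; every type and method in it is a hypothetical placeholder, not a real library API:

```java
// Illustrative sketch of a human-in-the-loop (HITL) workflow:
// accept high-confidence model predictions, route the rest to people,
// and feed human answers back in as new training data.
// All types and methods here are hypothetical placeholders.
public class HitlRouter {
    record Prediction(int category, double confidence) {}

    private static final double CONFIDENCE_THRESHOLD = 0.95;

    int classifyTicket(String emailThread, Model model, HumanQueue humans, TrainingSet trainingData) {
        Prediction p = model.predict(emailThread);
        if (p.confidence() >= CONFIDENCE_THRESHOLD) {
            return p.category();                      // ML handles the easy cases
        }
        int humanLabel = humans.label(emailThread);   // HITL handles the rest
        trainingData.add(emailThread, humanLabel);    // TD grows, so ML improves
        return humanLabel;
    }

    interface Model { Prediction predict(String input); }
    interface HumanQueue { int label(String input); }
    interface TrainingSet { void add(String input, int label); }
}
```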
Myth 2: AI is only for the technical elite
Media reports can easily give the impression that AI belongs only to the technical elite: big companies such as Amazon, Apple, Facebook, Google, IBM, Microsoft, Salesforce, Tesla, and Uber, the only ones able to assemble large teams of machine learning experts and attract billion-dollar investments. This notion is wrong.
Today, you can start applying AI to your business for less than $100,000. If you are one of the roughly 26,000 companies in the U.S. with revenue greater than $50 million, that amounts to investing just 0.2% of revenue in AI applications (0.2% of $50 million is $100,000).
Therefore, AI is not exclusive to technical elites. It belongs to every business.
Myth 3: AI only solves billion-dollar problems
Mainstream media tend to report on futuristic projects, such as self-driving cars or delivery drones. Companies like Google, Tesla, and Uber have invested tens of billions of dollars to seize the lead in the future driverless-car market, driven by a “winner takes all” mentality. This gives the impression that AI is only for new, billion-dollar problems. But that is another mistake.
AI is also used to solve existing, smaller problems, such as million-dollar problems. Let me explain: the core requirement of any company is to understand its customers. That was true in the agora of ancient Greece and the trading forums of ancient Rome, and it is still true today, even as business transactions have moved overwhelmingly to the Internet. Many companies sit on a treasure trove of unstructured customer data, from email threads to Twitter comments. AI can be applied to challenges like classifying those support tickets or gauging the sentiment of those tweets.
So AI is not only for new and exciting billion-dollar problems such as self-driving cars. It also addresses existing, “uninteresting” small problems, such as understanding customers better through support ticket classification or social media sentiment analysis.
Myth 4: Algorithms are more important than data
Mainstream media reports on AI tend to treat the machine learning algorithm as the most important element, all but equating algorithms with the human brain. They imply that the algorithm is what makes the magic work and that ever more sophisticated algorithms will surpass the human brain; reports about machines defeating humans at Go and chess are examples. The coverage dwells on “deep neural networks”, “deep learning”, and how machines make decisions.
Such reports may give companies the impression that to apply AI they must first hire machine learning experts to build a perfect algorithm. But if a company does not consider how to obtain larger amounts of higher-quality, customized training data for the model to learn from, even a perfect algorithm may not deliver the desired results (“We have great algorithms, but our model is only 60% accurate”).
Buying commercial machine learning services from companies such as Microsoft, Amazon, and Google without a training data plan or budget is like buying a car and never filling it with gasoline: you have just bought a large, very expensive piece of metal. In one way the analogy even understates the point, because a machine learning model keeps getting better the more training data you give it, whereas gasoline is simply consumed. So the quality and quantity of training data are at least as important as the algorithm.
Myth 5: Machines > People
For the past 30 years, the media have liked to portray AI as machines stronger than humans, such as Schwarzenegger’s Terminator in “The Terminator” or Ava (played by Alicia Vikander) in “Ex Machina”. That is understandable, because the media want a simple narrative of who wins, machine or human. But it does not match reality.
For example, the recent news that Google DeepMind’s AlphaGo defeated Lee Sedol was described by the media simply as a victory of machines over humans. This is inaccurate, and the real situation is not so simple. A more accurate description would be “a machine, together with many people, defeated one person”.
The core reason to dispel this misunderstanding is that machines and humans have complementary capabilities. Machines excel at structured computation and perform well on tasks like “find the feature vectors”. Humans excel at understanding meaning and context and perform well on tasks like “find the leopard-print dress”, while “find the feature vectors” is not at all easy for humans.
Therefore, the correct framework for enterprises is to realize the complementarity of machines and humans, and AI is the joint work of machines and humans.
Myth 6: AI means machines replacing humans
The mainstream media like to portray a dystopian future because they think it attracts attention. It may indeed attract readers, but it does not help anyone truly understand how machines and humans work together.
For example, let’s return to support ticket classification. In most companies today this is still a 100% manual process, which makes it slow and costly and limits how much can be done. Suppose that after training on 10,000 categorized support tickets you have a model with 70% accuracy, meaning it is wrong 30% of the time; that is where human-in-the-loop intervenes. You can set the acceptable confidence level to 95% and auto-accept only outputs at or above that confidence. Initially the machine learning model may handle only a small share of the work, say 5-10%. But as the model receives newly human-labeled data, it learns and improves, so over time it handles more and more of the ticket classification, and the business can greatly increase the number of tickets it classifies.
So combining machines and people increases the workload a business can handle while maintaining quality and lowering unit costs. This dispels the myth of machines replacing humans: the truth is that AI is machines augmenting humans.
Myth 7: AI=ML
The last media myth about AI is treating artificial intelligence and machine learning as the same thing. This can lead corporate managers to think that simply buying a commercial machine learning service from Microsoft, Amazon, or Google will turn AI into products.
To implement an AI solution you need more than machine learning: you also need training data and a human-in-the-loop. Machine learning without training data is like a car without gasoline, expensive and going nowhere. Machine learning without a human-in-the-loop leads to undesirable consequences, because you need people to overrule the model’s low-confidence predictions.
Artificial intelligence in the future
Transportation will certainly see a big change, from today’s manual driving to tomorrow’s driverless vehicles. In Silicon Valley you can already see driverless cars in use, and the technology is not limited to cars: unmanned aircraft can take it to the sky, and Ele.me (the Chinese food-delivery company whose name literally asks “are you hungry?”) has already begun commercial food delivery with small drones.

Medicine will change just as much. Artificial intelligence will automate diagnosis by automatically reviewing a patient’s condition, while wearable medical devices and mobile applications will push AI-assisted medicine further still; smart wheelchairs and intelligent prosthetics can also be further improved.

In security, artificial intelligence will likewise be indispensable. It will become a very important part of public safety, from facial recognition on surveillance cameras to, perhaps one day, robotic police and judges. Facial recognition is already built into most cameras and is a great help to police in finding suspects, and artificial intelligence should help them even more in the future.
The ThoughtWorks IPO (TWKS) is coming soon on the Nasdaq. ThoughtWorks was always a very strong competitor in my career as a salesperson. It is a leader in digital transformation, microservices, and agile consulting, and it really has no peer in trendsetting and consulting expertise on microservices: the very principles of microservices, agile, and DevOps were formed and contributed to by research scientists and employees of ThoughtWorks.
Now, why is ThoughtWorks going to be successful in the future? Because the very technologies in which it is expert are going to be massively in demand for the next 10 to 15 years. You might ask: what are microservices, and why will they matter for the next 10 to 15 years?
Consider it this way: successful services such as Netflix, Amazon, Robinhood, Google, Facebook, Zalando, Groupon, eBay, Uber, Comcast, and SoundCloud are all built on the microservices principles whose foundation Martin Fowler and James Lewis of ThoughtWorks laid.
We are soon going to see a massive digital transformation across all types of businesses and platforms, which will need the services of companies like ThoughtWorks. ThoughtWorks is going to be the digital transformation partner for many traditional, large businesses. Much of the business that comes to ThoughtWorks comes because of its brand name in microservices and digital transformation consulting, and it beats its competition in many deals for this kind of IT consulting work. It was one of my toughest competitors during my work as an IT consulting salesperson.
ThoughtWorks is on the vendor invitation lists of many chief information officers and chief digital officers of large businesses for their digital transformations.
The biggest threat to ThoughtWorks may come from the large Indian IT companies and the large IT consulting and services firms of the United States and Europe. But given its leadership position, it should hold on to a significant share of the market for such services. Although history and expertise are on ThoughtWorks’ side, it should not rest on its past laurels; it should continue to invest in newer technology and keep its well-known focus on research and development in setting new technology principles, to sustain shareholder belief in the company after the ThoughtWorks IPO.

ThoughtWorks Holdings Inc. (TWKS) has set terms for its initial public offering that could value the Chicago-based technology consultancy at up to $6.10 billion. A total of 36.84 million shares will be offered in the ThoughtWorks IPO, with the company offering 16.43 million shares and selling shareholders offering 20.41 million shares. In its filing, ThoughtWorks said it would offer the 36,842,106 shares of common stock at a price per share between $18 and $20, which means the company could raise up to $328.6 million through the IPO (only the 16.43 million newly issued company shares generate proceeds for the company; proceeds from the selling shareholders’ 20.41 million shares go to those holders). The stock is expected to list on the Nasdaq under the ticker symbol “TWKS.” The company filed its paperwork under the name Turing Holding Corp but plans to change its name to ThoughtWorks Holding Inc. before the completion of the IPO. Goldman Sachs and J.P. Morgan are the lead underwriters for the offering.
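As a quick sanity check on the proceeds figure (a rough calculation, assuming the company’s own 16.43 million newly issued shares price at the $20 ceiling):

$$ 16{,}430{,}000 \times \$20 \approx \$328.6\ \text{million} $$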
The 28-year-old firm provides services to companies such as Canadian wireless carrier TELUS Corp, U.S. supermarket chain Kroger Co, and payments company PayPal Holdings Inc. It has helped numerous well-known corporations, including Atlassian, Bayer, Sephora, Porsche, and Walmart, with their digital transformations and strategies. ThoughtWorks integrates strategy, design, and software to help businesses succeed online, providing premium, end-to-end digital strategy, design, and engineering services. In all, ThoughtWorks serves over 300 companies, a customer list worth considering when weighing the ThoughtWorks IPO.

Digital transformation, IT consulting, and microservices together represent a market of almost $400 billion in the technology industry, and a sizeable share of it could come ThoughtWorks’ way given its established consulting presence and recognized leadership in microservices. The global microservices architecture market alone was valued at $2,073 million in 2018 and is projected to reach $8,073 million by 2026, registering a CAGR of 18.6% from 2019 to 2026. Microservices architecture (MSA) is an approach to developing software systems in which large monolithic applications are broken down into smaller, manageable, independent services.
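A quick check on that CAGR figure (a rough calculation, treating the 2018 value as the base and 2026 as eight years out; the small gap from the quoted 18.6% is rounding in the endpoint values):

$$ \left(\frac{8{,}073}{2{,}073}\right)^{1/8} - 1 \approx 0.185 \quad (\text{about } 18.5\% \text{ per year}) $$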
The global digital transformation market size was valued at USD 336.14 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 23.6% from 2021 to 2028. The growing demand for advanced technology, such as the Internet of Things (IoT), across businesses and enterprises, is promoting the adoption of connected devices as well as data-rich and analytics solutions. Moreover, these solutions enable the integration of intelligence into business operations and processes to facilitate improved and effective customer engagements, while driving operational optimization. The increasing use of mobile devices, smartphones, and applications across business processes and departments is also promoting digitization and is expected to drive the market over the forecast period. Shifting from traditional to digitalized business models facilitates the introduction of additional advanced technological products and services across industries and sectors.
Looking at this data, we can gauge how much business ThoughtWorks can target and thus make a sounder decision on the ThoughtWorks IPO.
Digital transformation enables organizations to improve their operational performance, customer experience, brand reputation, and customer retention ratios. Moreover, digitally transformed businesses can efficiently adapt to the changing technological landscape and can address sudden shifts in the industry, especially the one currently brought in by the Covid-19 pandemic; studies suggest that the efficiency and rate of adaptation of digitally transformed businesses to a post-pandemic era are much higher than traditional businesses.
The increasing demand for industrial automation is also one of the key market drivers. The rising adoption of wireless communication and other advanced technologies across businesses and verticals is expected to accelerate digital transformation. Several industries, including energy and power, manufacturing, healthcare, and education, are investing in automation solutions to make an immediate and lasting difference in process optimization, and have begun shifting toward complete digitization by implementing smart systems and IoT sensors across their processes. At the same time, untapped areas remain within businesses and enterprises, along with a few industries and sectors where the rate of digitization has traditionally been low; the manufacturing sector, especially across emerging economies, has traditionally been a slow adopter of advanced techniques and technologies. As mentioned earlier, ThoughtWorks is the market leader here, and this should be taken into consideration before investing in the ThoughtWorks IPO.
Part One: An Introduction to Microservices
1. What are Microservices?
To introduce microservices, we must first understand what they are. As the name suggests, a microservice has to be understood from two aspects: what is “micro” and what is “service”. In the narrow sense, the famous “two-pizza team” captures “micro” well (the two-pizza team was first proposed by Amazon CEO Jeff Bezos, meaning that everyone involved in a single service, from design through development, testing, operations, and maintenance, should add up to no more than two pizzas’ worth of people). A “service”, as distinct from a whole system, is one relatively small and independent functional unit, or a set of them: the minimum set of functions a user can perceive.
2. Origin of Microservices
First proposed by Martin Fowler and James Lewis in 2014, the microservices architectural style is a way to develop a single application as a set of small services, each running in its own process and communicating through lightweight mechanisms, usually HTTP APIs. The services are built around business capabilities, can be deployed independently through automated deployment mechanisms, may be implemented in different programming languages and use different data storage technologies, and are kept under minimal centralized management.
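As a minimal illustration of this definition, here is a sketch of one small service built with Spring Boot: one process, one HTTP API, independently deployable. The /orders endpoint and its payload are illustrative placeholders, not a real system’s API:

```java
// A minimal Spring Boot microservice in the spirit of this definition:
// one small service, running in its own process, exposing an HTTP API.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderServiceApplication {

    public static void main(String[] args) {
        // Starts an embedded web server; the service is independently
        // deployable as a single executable jar.
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    @GetMapping("/orders/{id}")
    public String getOrder(@PathVariable String id) {
        // Other services call this over plain HTTP (a lightweight
        // mechanism) rather than through in-process method calls.
        return "{\"id\": \"" + id + "\", \"status\": \"CREATED\"}";
    }
}
```

Run in its own process (java -jar order-service.jar), it communicates with the rest of the system only over HTTP, which is exactly the lightweight mechanism the definition describes.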
3. Why do you need microservices?
In the traditional IT industry, most software is an accumulation of independent systems whose problems can be summed up as poor scalability, low reliability, and high maintenance costs. SOA tried to address this, but its early bus pattern was strongly bound to a particular technology stack, such as J2EE, so many enterprises’ legacy systems were difficult to connect, the switchover took too long, costs were too high, and it took time for new systems to stabilize. In the end, SOA looked beautiful but became an enterprise-grade luxury that small and medium-sized companies feared.
3.1 Problems caused by the monolithic architecture
A monolithic architecture works well while the system is relatively small, but as the system grows it exposes more and more problems, mainly the following:
1. Complexity keeps rising
Some projects run to hundreds of thousands of lines of code, the boundaries between modules blur, and the logic becomes confused; the more complex the code, the harder it is to solve the problems you encounter.
2. Technical debt accumulates
Staff turnover is normal, but some employees neglect code quality before leaving and leave many defects behind. Because the project codebase is huge, each defect is hard to find, which greatly troubles the employees who inherit the code; the more turnover, the more defects are left behind. This is what is meant by growing technical debt.
3. Deployment gets slower and slower
This is easy to understand: monolithic modules are very large, with a huge amount of code, so deploying the project takes more and more time. Once a project takes 10 minutes to start, a handful of restarts eats up a working day and leaves developers very little time to actually develop.
4. It hinders technological innovation
For example, suppose a previous project was written with Struts 2. Because the modules are inextricably linked, the codebase is large, and the logic is unclear, refactoring the project to Spring MVC would be very difficult and very costly, so companies often have to grit their teeth and keep using the old Struts architecture, which hinders technical innovation.
5. It cannot scale on demand
For example, the movie module may be CPU-intensive while the order module is IO-intensive. If we want to improve the performance of the order module, say by adding memory or disks, we cannot do so in isolation: because all modules live in one deployable, scaling one module means weighing its effect on every other module, so we cannot scale on demand.
3.2 Differences between Microservices and Monolithic Architectures
Each microservice module is the equivalent of a separate project, so the amount of code per project drops significantly and problems become relatively easy to solve.

In a monolithic architecture all modules share one database and one storage model; with microservices, each module can use a different storage technology (some Redis, some MySQL, and so on), and each module has its own database.

In a monolithic architecture every module is developed with the same technology; with microservices, each module can be built with a different technology, making the development model far more flexible.
3.3 Microservices and SOA Differences
Microservices are, in essence, an SOA architecture. In a microservice system there can be services written in Java alongside services written in Python, unified into one system through a RESTful architectural style. So microservices are independent of any specific technology implementation and are highly scalable.
4. The Nature of Microservices
The key to microservices is not just the services themselves: the system must provide a base architecture that allows microservices to be deployed, run, and upgraded independently. Beyond that, the architecture must let the services be structurally “loosely coupled” while functioning as a unified whole. This “unified whole” shows up as a unified interface style, unified permissions management, unified security policy, unified release process, unified logging and auditing, unified scheduling, unified access entry, and so on.
The purpose of microservices is to effectively split applications for agile development and deployment.
Microservices promote the idea that teams should inter-operate, not integrate. Inter-operating means defining the boundaries and interfaces of the system and then letting a full-stack team be autonomous within them. When teams are formed this way, communication costs stay inside each system, each subsystem becomes more cohesive, the coupling between subsystems weakens, and the cost of cross-system communication drops.
5. What kind of project is suitable for microservices
Microservices should be divided along the lines of independent business functions. If a system provides very low-level services, such as an operating system kernel, a storage system, a network system, or a database system, its functions are tightly interrelated; forcibly splitting it into smaller service units would sharply increase the integration workload, and such artificial cutting would bring no real business isolation. The pieces could not be deployed and run independently, so such systems are not suitable for microservices.
Whether a system is suitable for microservices depends on four elements, which mirror the design principles in Section 6.1 below: single responsibility, service autonomy, lightweight communication, and clear interfaces.
6. Microservice Splitting and Design
Moving from a monolithic structure to a microservice architecture, you will keep running into the problem of service boundaries. For example, if we have a user service that provides basic user information, should the user’s avatar and pictures be split into a new service, or merged into the user service? If service granularity is too coarse, we are back on the old monolithic road; if it is too fine, the overhead of inter-service calls becomes significant and the difficulty of management grows exponentially. So far there is no standard for drawing service boundaries; they can only be adjusted to fit each business system.
The big principle of splitting: when a piece of business does not depend on, or rarely depends on, other services, has independent business semantics, and provides data to more than two other services or clients, it should be split out into a separate service module.

6.1 Microservice Design Principles
Principle of Single Responsibility
Each microservice only needs to implement its own business logic. The order management module, for example, only needs to handle the business logic of orders and nothing else.
Principle of Service Autonomy
Each microservice is independent in development, testing, operations, and maintenance, down to its own independent database; it has a complete lifecycle of its own and can be treated as a standalone project that does not have to rely on other modules.
Principle of Lightweight Communication
Communication should be lightweight in two senses: the protocol itself is lightweight, and the communication style is cross-language and cross-platform. Being cross-language keeps each microservice independent enough not to be held hostage by any one technology.
Principle of Clear Interfaces
Since microservices may call one another, the interfaces should anticipate as many situations as possible at design time and be as general and flexible as possible, so that a change to one microservice’s interface does not force adjustments in other modules later.
7. Characteristics, Advantages, and Disadvantages of Microservices

A microservice architecture has the following characteristics:

Each microservice can run independently in its own process;
A series of independently running microservices together build up the entire system;
Each service is developed around a separate business capability, and a microservice generally completes one specific function, such as order management or user management;
Microservices communicate through lightweight mechanisms, such as REST API or RPC calls.
Advantages of the microservice architecture:

Easy to develop and maintain
Since a single microservice module is the equivalent of one project, developing that module only requires caring about that module’s logic; the amount of code and the logical complexity are reduced, which makes it easy to develop and maintain.
Faster startup
This follows from the previous point: starting the service for a single small module is obviously much faster than starting an entire monolithic project.
Local changes are easy to deploy
When we find a problem during development in a monolith, we must re-release and restart the whole project, which is very time-consuming. With microservices it is different: whichever module has the bug, we fix that module’s bug and restart only that module’s service. Deployment is relatively simple, the entire project does not need restarting, and time is saved.
The technology stack is not locked in
For example, suppose the order and movie microservices were originally written in Java and we now want to rewrite the movie microservice in Node.js. That is entirely possible, and because the focus is only on the movie logic, the cost of the technology swap is much smaller.
Scaling on demand
As noted above, in a monolithic architecture, extending the performance of one module forces you to consider whether other modules will be affected; for microservices this is not a problem at all.
Disadvantages of the microservice architecture:

High operations and maintenance requirements
With a monolith we only need to operate one project, but a microservice project is composed of many services, and a problem in any module can make the whole system behave abnormally. It is often not easy to know which module caused the problem, because we cannot trace it step by step in a debugger, and that places high demands on the operations staff.
Distributed complexity
A monolith can avoid distribution, but for a microservice architecture, distribution is almost a mandatory technology, and the inherent complexity of distributed systems makes the microservice architecture complex too.
High cost of interface changes
For example, suppose the user microservice is called by the order microservice and the movie microservice. Once the user microservice’s interface changes significantly, all microservices that depend on it must be adjusted accordingly, and since there may be very many microservices, the cost of an interface change rises significantly.
Repeated work
In a monolith, logic used by several modules can be abstracted into a utility class that every module calls directly. Microservices cannot do this, because one microservice’s utility class cannot be called directly by the others, so each microservice has to rebuild the same utility, which duplicates code.
8. Microservice Development Frameworks
At present, the most commonly used microservice development frameworks are the following four:
Spring Cloud: http://projects.spring.io/spring-cloud (now a very popular microservice architecture)
Dubbo: http://dubbo.io
Dropwizard: http://www.dropwizard.io (focused on developing individual microservices)
Consul, etcd, etc. (building blocks for microservices)
9. The difference between Spring Cloud and Spring Boot
Spring Boot:
Designed to simplify the creation of production-grade Spring applications and services, it simplifies configuration, embeds the web server, includes many out-of-the-box microservice capabilities, and can be deployed together with Spring Cloud.
Spring Cloud:
A microservice toolkit that provides developers with kits for distributed configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, the control bus, and more.
Part Two: Microservices in Practice
1. How do clients access all these microservices? (API Gateway)
In the traditional way of developing, all services are local and the UI can call them directly. Now the system is split by function into independent services, each running in a separate Java process, generally on a separate virtual machine. How does the client UI access them? With N services in the background, the front end would need to know about and manage all N; every time a service goes offline, updates, or upgrades, the front end would have to be redeployed, which obviously defeats the purpose of the split, especially when the front end is a mobile application and business there usually changes faster. Besides, N fine-grained service calls add no small amount of network overhead. Finally, the services themselves are usually stateless, while user login information and permissions are best maintained and managed in one unified place (OAuth).

Therefore, between the N backend services and the UI there is generally a proxy, also called an API Gateway, whose roles include:
Provide a unified service entry point so the microservices stay transparent to the front end
Aggregate backend services to save traffic and improve performance
Provide API management functions such as security, filtering, and flow control
In practice this API Gateway can take many broadly defined forms: a hardware or software appliance, a simple MVC framework, or even a Node.js server. Its most important role is to aggregate backend services for the front end (usually a mobile application), provide a unified service exit, and decouple the two sides; but the API Gateway can also become a single point of failure or a performance bottleneck.
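As one concrete, hedged illustration, Spring Cloud Gateway lets you declare such a gateway in a few lines of Java. The route IDs, paths, and service URIs below are hypothetical; the RouteLocatorBuilder API is Spring Cloud Gateway’s standard way to declare routes in code:

```java
// Illustrative API Gateway route configuration using Spring Cloud Gateway.
// Service URIs and paths are hypothetical placeholders.
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // One public entry point; the UI never needs to know
                // where the N backend services actually live.
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("http://order-service:8081"))
                .route("users", r -> r.path("/api/users/**")
                        .uri("http://user-service:8082"))
                .build();
    }
}
```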
2. How do microservices communicate? (Service calls)
Because all microservices are independent Java processes running on independent virtual machines, traffic between services is inter-process communication (IPC), for which there are many mature schemes. Today the following styles are essentially universal (each could fill a book if expanded, and the details are generally familiar, so we will not expand on them here):
REST (JAX-RS, Spring Boot)
RPC (Thrift, Dubbo)
Asynchronous message calls (Kafka, Notify)
Synchronous calls are relatively simple and strongly consistent, but they are prone to call problems and their performance suffers, especially when the call chain is deep. The comparison between RESTful and RPC is itself an interesting topic. REST, generally based on HTTP, is easier to implement and easier to accept; server implementation technology is flexible, every language supports it, and it works across clients with no special client requirements beyond an HTTP SDK, so it is in relatively wide use. RPC has its own advantages: the transport protocol is more efficient, and it is more secure and controllable. Especially within one company, if there is a unified development standard and a unified service framework, RPC’s development efficiency advantage is more obvious. Look at your own technical accumulation and actual conditions, and choose for yourself.
Asynchronous messaging is especially widely used in distributed systems. It reduces coupling between calling services and acts as a buffer between calls, ensuring that a backlog of messages will not flood the callee while the caller keeps a good service experience and continues its own work without being dragged down by backend performance. The costs: consistency is weakened, and you must accept eventual consistency of the data; backend services generally have to be idempotent, because for performance reasons messages are usually delivered with possible repeats (guaranteeing exactly-once delivery is a great test of performance); and you must introduce an independent broker, whose distributed management is a great challenge if the company has no technical accumulation in this area.
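A minimal sketch of the asynchronous style with the Apache Kafka producer client follows. The broker address and topic name are hypothetical, and a real system would add retries, keys chosen for partitioning, and an idempotent consumer on the other side:

```java
// Illustrative asynchronous messaging sketch with the Apache Kafka client.
// The broker buffers messages so a slow consumer does not slow the caller;
// the consumer must be idempotent because delivery can be repeated.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class OrderEventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget: the caller continues immediately and relies
            // on eventual consistency downstream.
            producer.send(new ProducerRecord<>("order-created", "order-42", "{\"status\":\"CREATED\"}"));
        }
    }
}
```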
3. How do you find so many services? (Service Discovery)
In a microservice architecture, each service generally has multiple replicas for load balancing. A service may go offline at any time, or new service nodes may be added to handle temporary bursts of traffic. How do services perceive each other? How are services managed? This is the problem of service discovery. There are generally two kinds of practice, each with its own advantages and disadvantages. Both fundamentally use ZooKeeper or a similar technology to manage service registration information in a distributed way. When a service comes online, the provider registers its service information with ZK (or a similar framework) and maintains a long-lived connection via heartbeats, updating the link information in real time. Service callers look up addresses through ZK and, according to a customizable algorithm, pick a service; they can also cache the service information locally to improve performance. When a service goes offline, ZK notifies its clients.
Client-side discovery: the advantage is a simple architecture and flexible extension, depending only on the service registry. The disadvantage is that the client must maintain the addresses of all the services it calls, which is technically demanding; large companies generally have a mature internal framework for this, such as Dubbo.
Server-side discovery: the advantage is simplicity, since all services are transparent to the front-end caller; this is the more common choice for applications deployed on cloud services at smaller companies.
4. What if the service hangs up in Microservices?
The defining feature of a distributed system is that the network is unreliable. Splitting into microservices can reduce this risk, but without special safeguards the outcome is definitely a nightmare. We recently hit an online failure caused by a very humble SQL counting query: as traffic grew it drove database load up, degrading the performance of the application and, in turn, every front-end application that called this service. So when our system is composed of a chain of service calls, we must ensure that a problem in any one link does not take down the whole chain.
There are many corresponding means, including the degradation (for example, falling back to a local cache), rate limiting, circuit breaking, and timeout-with-retry techniques discussed later. These methods are well understood and generic, so we will not detail them all here.
For example, Netflix's Hystrix: https://github.com/Netflix/Hystrix
5. Issues to consider for Microservices
Here is a very good diagram summarizing the issues to consider in a microservice architecture, including:
API Gateway
Inter-service calls
Service Discovery
Service Fault Tolerance
Service Deployment
Data calls
Part Three: Important components of microservices
1. Microservices Basic Capabilities
2. Service Registry
Services need a discovery mechanism to help them perceive each other's existence. When a service starts, it registers its own information with the registry and subscribes to the services it needs to consume.
The service registry is the core of service discovery. It holds the network address (IP address and port) of every available service instance, and it must be highly available and updated in real time.

The Netflix Eureka mentioned above is such a registry. It provides a REST API for registering services and querying service information: a service registers its IP address and port with a POST request, refreshes its registration with a PUT request every 30 seconds, logs off with a DELETE request, and clients fetch the available instance information with a GET request.

Netflix achieves high availability by running multiple instances on Amazon EC2, each Eureka server having an elastic IP address. When a Eureka server starts, DNS servers are allocated dynamically; a Eureka client obtains Eureka's network address (IP address and port) by querying DNS, and is generally given a Eureka server in its own availability zone. Other systems that can act as a service registry include:
etcd: a highly available, distributed, strongly consistent key-value store; both Kubernetes and Cloud Foundry use etcd.
Consul: a tool for service discovery and configuration. It provides an API that lets clients register and discover services, and it can run health checks to determine service availability.
ZooKeeper: a high-performance coordination service widely used in distributed applications. Apache ZooKeeper was originally a subproject of Hadoop but is now a top-level project.
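The Eureka-style REST lifecycle described above (register with POST, renew the lease every 30 seconds with PUT, log off with DELETE) can be sketched in a few lines. The registry URL and path below are hypothetical stand-ins, not Eureka's real endpoints; the point is only the shape of the protocol.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RegistryClient {
    private static final HttpClient http = HttpClient.newHttpClient();
    // Hypothetical registry URL and instance path; real Eureka paths differ.
    private static final String INSTANCE =
            "http://registry:8761/services/hello-service/10.0.0.5:8080";

    public static void main(String[] args) throws Exception {
        // 1. Register this instance's address with a POST.
        send("POST", INSTANCE);

        // 2. Renew the lease with a PUT every 30 seconds, as the article describes.
        ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
        heartbeat.scheduleAtFixedRate(() -> {
            try { send("PUT", INSTANCE); } catch (Exception ignored) { }
        }, 30, 30, TimeUnit.SECONDS);

        // 3. On shutdown, log off with a DELETE so callers stop seeing this instance.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try { send("DELETE", INSTANCE); } catch (Exception ignored) { }
        }));
    }

    private static void send(String method, String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .method(method, HttpRequest.BodyPublishers.noBody())
                .build();
        http.send(req, HttpResponse.BodyHandlers.discarding());
    }
}
```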
2.1 zookeeper service registration and discovery
In simple terms, ZooKeeper can act as the service registry: multiple service providers form a cluster, and service consumers obtain concrete service addresses (IP + port) from the registry in order to reach a concrete provider.
More concretely, ZooKeeper behaves like a distributed file system: whenever a service provider is deployed, it registers its service at a path of the form /{service}/{version}/{ip:port}. For example, if our HelloWorldService is deployed on two machines, ZooKeeper creates two records: /HelloWorldService/1.0.0/100.19.20.01:16888 and /HelloWorldService/1.0.0/100.19.20.02:16888.
ZooKeeper provides a heartbeat detection function: it periodically sends a request to each service provider (in fact, a long-lived socket connection is established), and if a provider fails to respond for a long time, the registry considers it dead and culls it. For example, if the machine 100.19.20.02 goes down, the only path left on ZooKeeper will be /HelloWorldService/1.0.0/100.19.20.01:16888.
A service consumer watches the corresponding path (/HelloWorldService/1.0.0). As soon as anything under that path changes (a node is added or removed), ZooKeeper notifies the consumer that the provider address list has changed, so it can refresh its local copy.
More importantly, ZooKeeper's built-in fault tolerance and disaster recovery capabilities (such as leader election) ensure the high availability of the service registry.
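Putting the last three paragraphs together, here is a minimal sketch against the plain ZooKeeper client API: the provider registers an ephemeral node under /HelloWorldService/1.0.0, and a consumer lists the children and re-arms a watch to learn about changes. The host name is an assumption, the example address comes from the text, and error handling is pared down for brevity.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkServiceRegistry {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 15000, event -> { });

        // Make sure the parent path exists (persistent nodes).
        for (String p : new String[]{"/HelloWorldService", "/HelloWorldService/1.0.0"}) {
            if (zk.exists(p, false) == null) {
                zk.create(p, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }
        }

        // Provider side: register as an EPHEMERAL node, so the entry disappears
        // automatically when the provider's session dies -- this is the "cull".
        zk.create("/HelloWorldService/1.0.0/100.19.20.01:16888",
                new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Consumer side: list the providers and re-arm a watch; ZooKeeper fires
        // the watcher whenever a child under the path is added or removed.
        Watcher onChange = new Watcher() {
            public void process(WatchedEvent event) {
                try {
                    System.out.println("providers changed: "
                            + zk.getChildren("/HelloWorldService/1.0.0", this));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        System.out.println("initial providers: "
                + zk.getChildren("/HelloWorldService/1.0.0", onChange));
        Thread.sleep(Long.MAX_VALUE); // keep the session (and its watches) alive
    }
}
```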
3. Load Balancing
To ensure high availability, every microservice is deployed as multiple service instances. The client must then balance its load across them.
3.1 Common Strategies for Load Balancing
3.1.1 Random
Requests arriving from the network are randomly assigned to one of the internal servers.
3.1.2 Polling
Requests arriving from the network are assigned to the internal servers in turn, from 1 to N and then starting over. This algorithm suits server groups in which every server has the same configuration and the average request load is roughly balanced.
3.1.3 Weighted Polling
Servers are assigned different weights according to their processing power, so that each receives a share of requests proportional to its weight. For example, with server A's weight set to 1, B's to 3, and C's to 6, servers A, B, and C will receive 10%, 30%, and 60% of the requests respectively. This algorithm lets high-performance servers take more of the load and keeps low-performance servers from being overloaded.
3.1.4 IP Hash
This approach hashes the request's source IP and uses the hash to pick the real server, which means requests from the same host always land on the same server. No per-client state needs to be stored to achieve this stickiness. Note, however, that this approach can leave server load unbalanced.
3.1.5 Minimum number of connections
The time a server spends on each client request can vary greatly, and over longer working periods, a simple round-robin or random balancing algorithm will see the number of in-flight connections on each server diverge widely, so true load balance is not achieved. The least-connections algorithm keeps a record, for each server, of the number of connections it is currently handling; each new request is assigned to the server with the fewest active connections, which matches the actual situation better and balances the load more evenly. This algorithm suits services with long-running requests, such as FTP.
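As a concrete illustration of the weighted strategy in 3.1.3, here is a small weighted-random selector; the server addresses are placeholders, and the weights 1, 3, and 6 reproduce the 10%/30%/60% split from the example above.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class WeightedRandomBalancer {
    record Server(String address, int weight) { }

    private final List<Server> servers;
    private final int totalWeight;

    WeightedRandomBalancer(List<Server> servers) {
        this.servers = servers;
        this.totalWeight = servers.stream().mapToInt(Server::weight).sum();
    }

    // Pick a server with probability proportional to its weight:
    // weights 1, 3 and 6 yield roughly 10%, 30% and 60% of the traffic.
    Server pick() {
        int r = ThreadLocalRandom.current().nextInt(totalWeight);
        for (Server s : servers) {
            r -= s.weight();
            if (r < 0) return s;
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        WeightedRandomBalancer lb = new WeightedRandomBalancer(List.of(
                new Server("10.0.0.1:8080", 1),
                new Server("10.0.0.2:8080", 3),
                new Server("10.0.0.3:8080", 6)));
        for (int i = 0; i < 5; i++) System.out.println(lb.pick().address());
    }
}
```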
4. Fault tolerance
Fault tolerance, read literally, means tolerating faults: not letting an error spread, and keeping its impact within a fixed boundary. As the saying goes, "a dike of a thousand miles is breached by an ant hole"; fault tolerance is how we keep the ant hole from growing. Our common techniques of degradation, rate limiting, circuit breaking, and timeout with retry are all fault-tolerance methods.
When a call into a service cluster fails with, say, a timeout, a connection error, or another network exception, fault tolerance is applied according to the configured policy. Commonly supported policies include fail-fast and failover. If calls fail several times in a row, the circuit is broken directly and no further calls are initiated, preventing one misbehaving service from dragging down everything that depends on it.
4.1 Fault Tolerance Policy
4.1.1 Fast Failure
The service initiates only a single call and reports an error immediately on failure. Typically used for write operations that are not idempotent.
4.1.2 Failover
The service initiates a call and, on failure, retries against another server. Usually used for read operations, though retries add latency. The number of retries is usually configurable.
4.1.3 Fail-safe
When a service call throws an exception, it is simply ignored. Typically used for operations such as writing logs.
4.1.4 Failback (automatic recovery)
When a service call fails, the failed request is recorded and periodically resent. Typically used for message notifications.
4.1.5 Forking
Multiple servers are called in parallel, and the call returns as soon as any one succeeds. Usually used for read operations with demanding real-time requirements. The maximum parallelism can be set with forks=n.
4.1.6 Broadcast
A broadcast call invokes all providers one by one, and if any one fails, the whole call fails. It is typically used to notify all providers to update local resource information such as caches or logs.
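Here is a minimal sketch of the failover policy from 4.1.2, under the assumption that a call is simply a function from a provider address to a result; the addresses reuse the HelloWorldService examples from earlier, and a real framework such as Dubbo implements this (plus load balancing) internally.

```java
import java.util.List;
import java.util.function.Function;

public class FailoverInvoker {
    // Try each provider in turn; return the first successful result.
    // retries = 2 means at most 3 attempts in total.
    static <R> R invokeWithFailover(List<String> providers, Function<String, R> call, int retries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= retries && attempt < providers.size(); attempt++) {
            String provider = providers.get(attempt);
            try {
                return call.apply(provider); // e.g. an HTTP or RPC call to this address
            } catch (RuntimeException e) {
                last = e; // remember the failure and fail over to the next provider
            }
        }
        throw new IllegalStateException("all providers failed", last);
    }

    public static void main(String[] args) {
        List<String> providers = List.of("100.19.20.01:16888", "100.19.20.02:16888");
        String result = invokeWithFailover(providers,
                addr -> {
                    if (addr.startsWith("100.19.20.01")) throw new RuntimeException("timeout");
                    return "ok from " + addr;
                },
                2);
        System.out.println(result); // prints "ok from 100.19.20.02:16888"
    }
}
```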
5. Circuit Breaking (Fusing)
A circuit breaker is a kind of "intelligent fault tolerance": when the number or ratio of failed calls reaches a threshold, the breaker trips open and the program automatically cuts off the current RPC calls, preventing the error from spreading further. Implementing a circuit breaker mainly involves three states: closed, open, and half-open, with transitions between them as described below.
When handling exceptions, we must decide what to do based on the specific business situation. For example, if we call the commodity interface and the other side has only temporarily degraded itself, then as the calling gateway we should switch to an alternative service or fetch fallback data, and show the user a friendly prompt. We also need to distinguish types of exception: a crashed dependency may take a long time to fix, whereas a timeout may just mean the server is temporarily overloaded. A circuit breaker should be able to recognize these types and adjust its strategy to the specific kind of error. It also helps to support manual control: when the recovery time of a failed service is uncertain, an administrator can force the breaker's state by hand. Finally, a circuit breaker's proper scene is calling a remote service or shared resource that may fail; using a breaker around local private resources such as an in-process cache only adds overhead. Note, too, that a circuit breaker is not a substitute for exception handling in your application's business logic.
Some exceptions are stubborn, sudden, unpredictable, and hard to recover from, and they can lead to cascading failures (suppose, for example, that a cluster is under very high load; if part of the cluster dies while still holding a large share of resources, the whole cluster may suffer). Continuing to retry in this situation mostly just produces more failures. So at such times our application should fail fast immediately and then take appropriate steps to recover.
We can implement a CircuitBreaker with a state machine that has the following three states:
Closed: The breaker starts closed, allowing operations to execute. Internally it records the number of recent failures, incrementing the count each time an operation fails. If the number of failures (or the failure rate) reaches a threshold within a given period, the breaker transitions to Open. In the open state it starts a timeout timer, whose purpose is to give the cluster time to recover from the failure; when the timer expires, the breaker switches to Half-Open.
Open: In this state, the corresponding operation fails immediately and an exception is thrown at once.
Half-Open: In this state, the breaker allows a limited number of operations through. If they all succeed, it assumes the failure has been fixed, transitions to Closed, and resets the failure count. If any of them fails, it assumes the fault is still present, switches back to Open, and restarts the timer (giving the system more time to recover).
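The three states above map directly onto a small state machine. The sketch below is a single-threshold, illustrative version: the threshold, the timeout, and the decision to allow exactly one trial call in the half-open state are all simplifying assumptions. Production libraries such as Hystrix add failure-rate windows, metrics, and fallbacks on top of this core.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private Instant openedAt;

    private final int failureThreshold;
    private final Duration resetTimeout;

    CircuitBreaker(int failureThreshold, Duration resetTimeout) {
        this.failureThreshold = failureThreshold;
        this.resetTimeout = resetTimeout;
    }

    synchronized <T> T call(Supplier<T> operation) {
        if (state == State.OPEN) {
            // While open, fail fast; after the timeout, allow one trial call (half-open).
            if (Instant.now().isBefore(openedAt.plus(resetTimeout))) {
                throw new IllegalStateException("circuit open, failing fast");
            }
            state = State.HALF_OPEN;
        }
        try {
            T result = operation.get();
            // Success in the half-open (or closed) state: consider the fault recovered.
            state = State.CLOSED;
            failures = 0;
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= failureThreshold) {
                state = State.OPEN;       // trip the breaker
                openedAt = Instant.now(); // and start the recovery timer again
            }
            throw e;
        }
    }
}
```

Usage is simply wrapping the remote call: `breaker.call(() -> client.fetchProduct(id))`; once the breaker is open, callers get the fast-fail exception instead of waiting on a dead dependency.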
6. Rate limiting and downgrading
The goal is to keep core services stable. As traffic grows, you set a threshold on the number of requests the system can handle and reject requests beyond it outright. At the same time, to keep core services available, you can downgrade some non-core services: limit their flow by capping the maximum traffic they may receive, or manually downgrade an individual microservice through the management console.
7. SLA in Microservices
SLA is short for Service-Level Agreement: a contract between a network service provider and a customer that defines terms such as the type of service, its quality, and customer payment.
A typical SLA covers items such as guaranteed availability, performance targets, and the remedies that apply when they are not met.
8. API Gateways
The gateway here is the API gateway: all API calls enter through a unified API gateway layer, which provides a single point of access and output. The basic functions of a gateway are unified access, security protection, protocol adaptation, traffic control, support for both long-lived and short-lived connections, and fault tolerance. With a gateway in place, each API provider team can focus on its own business logic, while the gateway concentrates on security, traffic, routing, and similar concerns.
9. Multi-level caching
The simplest use of a cache is to look up the database once, write the result into a cache such as redis with an expiration time, and serve subsequent reads from the cache. Because entries expire, you should watch the cache penetration rate. For example, if the method queryOrder is called 1000 times per second and the DB method queryProductFromDb nested inside it is called 300 times per second, the penetration rate of redis is 300/1000. A high penetration rate means the cache is not doing its job.

Another way to use the cache is as a persistent cache with no expiration time, which raises the problem of keeping the data up to date. There are generally two approaches. The first uses a timestamp: reads go to redis by default, every write also stores a timestamp, and every read compares the stored timestamp with the current system time; if it is more than, say, 5 minutes old, the database is queried again. This guarantees redis always holds some data, and generally serves as a fault-tolerance fallback for the DB. The second approach really treats redis as a DB: subscribe to the database's binlog, push data into the cache through a data-heterogeneity system, and make the cache multi-level. You can use a JVM-local cache in the application as the first level (best suited to small, frequently accessed data), a redis cluster as the second-level remote cache, and, outermost, a third-level redis acting as the persistent cache.
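Here is a sketch of the timestamp variant, using the Jedis client as an assumed dependency; the key names, the 5-minute freshness window from the text, and queryOrderFromDb as a stand-in for the real database call are all illustrative.

```java
import redis.clients.jedis.Jedis;

public class TimestampedCache {
    private static final long MAX_AGE_MS = 5 * 60 * 1000; // refresh after 5 minutes

    private final Jedis jedis = new Jedis("redis-host", 6379);

    String getOrder(String orderId) {
        String key = "order:" + orderId;
        String stampKey = key + ":ts";

        String cached = jedis.get(key);
        String stamp = jedis.get(stampKey);
        boolean stale = stamp == null
                || System.currentTimeMillis() - Long.parseLong(stamp) > MAX_AGE_MS;

        if (cached != null && !stale) {
            return cached; // fresh enough, serve straight from redis
        }
        try {
            String fromDb = queryOrderFromDb(orderId);
            // No expiry is set: redis always holds *some* value, even a stale one.
            jedis.set(key, fromDb);
            jedis.set(stampKey, String.valueOf(System.currentTimeMillis()));
            return fromDb;
        } catch (RuntimeException dbDown) {
            // DB fault tolerance: fall back to the stale copy rather than failing.
            if (cached != null) return cached;
            throw dbDown;
        }
    }

    private String queryOrderFromDb(String orderId) {
        return "{\"id\":\"" + orderId + "\"}"; // stand-in for the real DB query
    }
}
```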
10. Timeouts and retry
Timeouts with retries are another fault-tolerance method. Wherever RPC calls occur, such as reads from redis, the DB, or MQ, a network fault or a failure in the dependency can leave calls hanging for a long time, piling up threads, driving up CPU load, and even triggering an avalanche. So set a timeout on every RPC call. For resources the service strongly depends on, there must also be a retry mechanism, but 1 to 2 retries are recommended at most. Moreover, if you retry, shrink the timeout accordingly: with 1 retry there are 2 calls in total, so with a 2s timeout the client could end up waiting 4s for an answer. Retry plus timeout therefore means a reduced per-attempt timeout.

It is also worth knowing where the time in an RPC call goes. A normal call's time consists of: (1) caller-side RPC framework execution time, (2) network transmission time, (3) server-side RPC framework execution time, and (4) server-side business code time. The caller and the server each have their own performance monitoring. Say the caller's tp99 is 500ms while the server's tp99 is 100ms, and the network team confirms the network is fine: where did the time go? There are two usual culprits: overhead on the client (caller) side, and TCP retransmission on the network. Pay attention to both.
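A minimal sketch of the "shrink the timeout when you retry" advice: give each attempt a slice of one overall budget instead of letting timeouts stack. The thread pool, the equal-split rule, and the toy supplier in main are illustrative assumptions.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class RetryWithTimeout {
    private static final ExecutorService pool = Executors.newCachedThreadPool();

    // The total wait stays bounded: with 1 retry, each attempt gets half the
    // budget, instead of letting the caller wait 2 x 2s = 4s as in the text.
    static <T> T call(Supplier<T> rpc, long totalBudgetMs, int retries) throws Exception {
        long perAttempt = totalBudgetMs / (retries + 1);
        Exception last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            Future<T> f = pool.submit(rpc::get);
            try {
                return f.get(perAttempt, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException e) {
                f.cancel(true); // free the worker thread before retrying
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Two attempts share a 2000ms budget: 1000ms per attempt.
        System.out.println(call(() -> "pong", 2000, 1));
        pool.shutdown();
    }
}
```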
11. Thread pool Isolation
On the resilience front, thread isolation was already mentioned when discussing Servlet 3 asynchronous processing. Its benefit is preventing cascading failures and even avalanches. When the gateway calls many interface services, we need thread isolation for each interface. For example, if we call orders, goods, and users, order traffic must not be able to affect the handling of goods and user requests. Without thread isolation, a network fault that delays the order service causes a thread backlog that eventually saturates the CPU of the whole service; in other words, the entire service becomes unavailable, and however many machines you have, they fill up with stuck requests in moments. With thread isolation, the gateway can guarantee that a local problem does not become a global one.
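A minimal bulkhead sketch along those lines: one bounded pool per downstream dependency, so a slow dependency can exhaust only its own threads. The pool sizes are arbitrary, and the service names (orders, goods, users) come from the example above.

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BulkheadGateway {
    // One bounded pool per downstream service: a slow order-service can only
    // exhaust its own 10 threads, never the threads serving goods or users.
    private final Map<String, ExecutorService> pools = Map.of(
            "orders", Executors.newFixedThreadPool(10),
            "goods",  Executors.newFixedThreadPool(10),
            "users",  Executors.newFixedThreadPool(10));

    Future<String> callService(String service, String request) {
        return pools.get(service).submit(() -> invokeRemote(service, request));
    }

    private String invokeRemote(String service, String request) {
        // Stand-in for the real RPC/HTTP call to the downstream service.
        return service + " handled " + request;
    }
}
```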
12. Downgrading and rate limiting
The industry has very mature approaches to downgrading and rate limiting, such as fallback mechanisms and rate-limiting methods like the token bucket, the leaky bucket, and semaphores, so here we will just share some of our own experience. Downgrades are generally driven by downgrade switches in a unified configuration center. When many interfaces come from the same provider, and that provider's system or its data-center network has a problem, we need a single unified downgrade switch; otherwise we would be degrading one interface at a time. In other words, there should be one big lever per type of business. Also beware of brute-force downgrades: turning off, say, the forum feature and showing users a big blank page is brute force. We want to cache some data for such cases, that is, to have fallback data to show. Implementing distributed rate limiting requires a shared back-end store such as redis, with lua scripts on the large nginx nodes reading the rate-limit configuration from redis. Our own rate limiting is per-machine only; we have not implemented distributed rate limiting.
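Of the rate-limiting methods just named, the token bucket is the easiest to show in a few lines. The capacity and refill rate below are arbitrary example numbers, and this single-JVM version is exactly the kind of stand-alone limiter the text describes; a distributed version would keep the bucket state in redis instead.

```java
public class TokenBucket {
    private final long capacity;       // burst size
    private final double refillPerMs;  // steady-state rate
    private double tokens;
    private long lastRefill;

    TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = tokensPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefill = System.currentTimeMillis();
    }

    // Returns true if the request may proceed, false if it should be rejected.
    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Top the bucket up for the time elapsed, never beyond capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(5, 100); // 100 req/s, bursts of 5
        for (int i = 0; i < 8; i++) {
            System.out.println("request " + i + (limiter.tryAcquire() ? " accepted" : " rejected"));
        }
    }
}
```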
13. Gateway Monitoring and Statistics
An API gateway makes serial calls, so any exception at any step should be recorded and stored in one unified place, such as Elasticsearch, to make later analysis of call exceptions easy. Since the company distributes all docker applications uniformly, and each docker host already carried 3 agents before distribution with no more allowed, we implemented an agent program that collects the logs emitted by the server, sends them to a Kafka cluster, consumes them into Elasticsearch, and exposes queries through the web. Our tracing functionality is still relatively simple and needs further enrichment.
Ensuring the consistency of business data in large enterprises is a very difficult problem. Generally speaking, customer or product-related data of multinational companies often come from multiple sources. This makes it difficult to answer even the simplest questions. In this case, data integration may be a solution.
Data integration provides organizations with a unified view of data stored in multiple data sources, and extraction, transformation, and loading (ETL) technology is an early attempt at data integration.
Using ETL, you can extract, transform, and load data from multiple source transaction systems into a single location, such as a company data warehouse. The extraction and loading parts are relatively mechanical, but the transformation part is not so easy: to get it right, you need to define business rules that specify which transformations are valid.
One major difference between ETL and data integration is that data integration is a broader field. It may also include data quality and the process of defining master reference data, such as defining customers, products, suppliers, and other key information related to the provision of business affairs within the company.

Let's look at an example. A large business may need to categorize products and customers at several levels and run marketing campaigns by segment. For the company's smaller subsidiaries, a simple product and customer classification hierarchy may suffice. In this example, a larger organization might classify a can of Coke under carbonated drinks, then beverages, then food and beverage sales, while a smaller subsidiary might book the same Coke directly under food and beverage sales without the intermediate classifications. This is why classification consistency, or at least an understanding of the differences, is needed to get a global view of the company's overall sales.
Unfortunately, knowing who you are doing business with is not always easy. For example, Shell UK is a subsidiary of the oil giant Royal Dutch Shell. Companies like Aera Energy and Bonny Gas Transport are Shell entities, and some have other investors. Therefore, business transactions with these companies need to be added to Shell’s global view as customers, but from the company name, this relationship is not obvious.
The vice president of a well-known investment bank once told me that they do not know how much business they have done globally with, for example, Deutsche Bank, let alone whether that relationship is profitable. The answers to these questions are buried in the systems of individual departments across the bank's global operations.
ETL technology was an early attempt to solve this problem. But to get the transformation step right, you need to define business rules that determine what a valid transformation is: for example, how to aggregate sales transactions, or how to map a database field when one system uses "m" to denote male customers and another uses "male". Advances in technology help with this process.
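To make the mapping rule concrete, here is a tiny, hypothetical normalization step of the kind an ETL transformation would apply; the code values, canonical forms, and the reject-on-unknown policy are invented for illustration.

```java
import java.util.Map;

public class GenderCodeTransform {
    // One business rule, expressed as data: every source system's code maps
    // to a single canonical value in the warehouse.
    private static final Map<String, String> CANONICAL = Map.of(
            "m", "MALE", "M", "MALE", "male", "MALE",
            "f", "FEMALE", "F", "FEMALE", "female", "FEMALE");

    static String normalize(String sourceValue) {
        String canonical = CANONICAL.get(sourceValue.trim());
        if (canonical == null) {
            // Unknown codes are routed to a reject file for review, not silently loaded.
            throw new IllegalArgumentException("unmapped gender code: " + sourceValue);
        }
        return canonical;
    }

    public static void main(String[] args) {
        System.out.println(normalize("m"));    // MALE
        System.out.println(normalize("male")); // MALE
    }
}
```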
Practice has shown that achieving integrated data takes more than ETL and data integration tooling alone. Data quality is also an important factor. What if you find duplicates in the customer or product files? In one project I participated in, 80% of customer records were duplicates, meaning the company had only one-fifth as many business customers as it thought.
In raw-materials master files, the duplication rate is usually 20% to 30%. These anomalies must be eliminated before a company-wide overview can be summarized.
Although data integration has clear advantages for large companies, it is not without challenges, such as the continuous growth of the unstructured data companies produce.
Moreover, since data now arrives in many formats (sensor data, weblogs, call logs, documents, images, and videos), ETL tools can be ineffective in this environment, because they were not designed with these factors in mind. These tools also struggle when data volumes become very large. Newer tools such as Apache Kafka try to solve this with real-time streaming data, overcoming the limitations of the earlier message-bus approach to real-time data integration.
From the early ETL to the present, the related technologies and concepts of data integration have undergone great changes. But it still needs to continue to evolve to keep up with the constantly changing needs of enterprises and the new challenges emerging in the era of big data.
To this end, H&M plans to adjust the way it uses information technology to improve business performance. Their goal is to win customers’ favour again through artificial intelligence and big data. As for the specific strategy, they hope to use big data to plan apparel categories sold in different stores, instead of providing the same supply of goods to all stores around the world.
H&M’s business strategy adjustment
At present, H&M has convened 200 data scientists, hoping to understand the sales model and trend of each product in each store. H&M hopes that by investing in big data technology and combining customer demand sorting at the local level, it will help it increase its own revenue while gaining the trust of stakeholders.
To achieve this goal, H&M’s management team needs to find a new way to create value for customers. They began to shift their focus to exploring market opportunities and developed corresponding solutions to seize these new opportunities. After research, managers and team members found that big data seemed to be a solution with great potential for success.
Like most traditional brick-and-mortar retailers, H&M originally analyzed consumer preferences through a team of designers and then developed products that catered to buyers' tastes. In practice, this model was not successful. So H&M began using algorithms to analyze store revenue, returns, and membership card data.
H&M’s new strategy no longer emphasizes universal apparel and store design, but tailors products according to local needs. Through analysis, H&M found that personalization and high-quality experience have become the only magic weapon to attract customers. Customers also hope to see high-quality materials and more fashionable design elements in apparel products.
It is worth mentioning that H&M does not intend to shrink its merchandise sales teams, but to equip them with advanced tools and technologies to make more informed decisions. The company hopes big data can help H&M's sales teams do exactly that.
Verification results:
In Östermalm, Stockholm, there is an H&M store that had always sold basic clothing for all ages and genders. In the early trial of the technology, big data and AI analysis showed that most of the customers who shop there are women, and that they prefer fashion items such as floral skirts.
In addition, through behavior analysis, H&M found that shoppers are more willing to choose higher-priced products. To this end, they began to display a $118 leather bag and a $107 cashmere sweater next to a $6 T-shirt and $12 shorts. In addition, they also added a coffee shop and sold flowers at the same time, because the data shows that customers want to rest in the place when shopping, or buy bouquets and clothing as gifts at the same time.
By analyzing the customer’s purchase and return records, this store has obtained a wealth of behavior data and favorite product types in the core market. H&M said that after the strategy adjustment, the store’s sales ushered in a significant increase, and now the products and experiences they provide are more in line with local preferences.
At the same time, H&M also uses big data to forecast sales trends three to eight months in advance. In addition to collecting information provided by external sources, they also collect data through 5 billion visits to stores and online websites. After analyzing the entire network data (including blog posts, search engines, etc.), the H&M team was able to understand the fashion trends and changing trends, thereby producing new items that are expected to become popular models.
More importantly, the retailer also uses algorithms to understand currency exchange rate fluctuations and raw material costs. In this way, they can ensure that the goods are correctly priced in each store. In fact, other competitors are also using similar technologies to win customers’ favor. For example, Zara is using robots to speed up online order taking, and GAP relies on market research data and Google Analytics to understand customer preferences.
Summary: Empower employees with the power of data and AI
In general, H&M has invested heavily in information technology with the main purpose of helping employees make judgments with data, not intuition. These algorithms operate around the clock and are constantly adjusted to match customer behavior and expectations. With the help of artificial intelligence and big data, decisions are no longer swayed by human emotion; H&M sees this as a positive signal that is expected to break through the limits of human decision-making ability.
H&M realizes that through big data and behavior analysis technology, companies will provide employees with the most reliable and most relevant information, and use this to promote the development of the company. The company believes that this is the only way to fully match product design with customer needs and ultimately increase operating value. Now, the maturity and popularization of big data technology is giving H&M a clearer vision, a more accurate positioning, and a stronger customer understanding.
As the most dazzling match in the Bundesliga, the match between Dortmund and Bayern Munich has completely exceeded the scope of sports events. For most people, this “national derby” may become the only Bundesliga event they pay attention to each year.
This year, the two most successful teams in German modern football have brought new meaning to the duel, and are even expected to attract a wider viewing group than in previous years.
The Bundesliga is the only football league in Europe to have resumed play after the mandatory COVID-19 lockdown. However, given how quickly the virus spreads, matches can only be played in empty stadiums, without fans in attendance.
So far, the Bundesliga has safely completed two rounds of matches. With other sporting events suspended, bored spectators have little to watch besides football, and attention to the Bundesliga from the international sports world has begun to rise.
But as other leagues resume their games in the next few weeks, the Bundesliga’s monopoly will soon be challenged. In order to maintain the current advantage, the Bundesliga organizers hope to make full use of this rare window of opportunity.
Digital plan
At present, technical solutions have become weapons in the contest between the major European leagues, whose organizers hope to use them to expand their business, cross borders, and open new markets. By combining digital services with application products, they aim to bring an "immersive" experience to fans who are thousands of miles from the stadium, some of whom have never attended a live match.
Data-driven insights are increasingly seen as an effective way to increase audience participation and improve the quality of broadcast programs.
To this end, the Bundesliga partnered with Amazon Web Services (AWS) to present real-time "Match Facts" built around exciting moments in live games. Under the partnership, AWS provides the cloud infrastructure and artificial intelligence (AI) tools to track TV viewership and produce tailored content for fans on online platforms. Considering the influence of young audiences and mobile/social media in developing markets, this kind of content customization makes a great deal of sense.
This cooperation agreement was signed in January this year. Attentive viewers may have discovered that the AWS logo has begun to appear on the game screen and the overlay analysis interface. The long-term plan for the Bundesliga is to collect 3.6 million data points and 10,000 event archives based on each game, and create a set of advanced statistical platforms accordingly.
Data-driven functions
The first phase of "Match Facts", launched with the support of AWS, chose this year's national derby as its debut stage. This time, the cloud giant brought the audience two new features: the average formation and expected goals (xGoals, abbreviated xG).
For the average formation, AWS tracks player position data in real time, giving fans insight into shifting trends on the pitch: whether a team is currently playing more offensively or defensively, how tactical changes can be identified, and what changes a substitution brings to the game.
Over time, the audience can even slowly understand how a team can change its skills and tactics based on available players. For example, fans may ask a substitute to play as soon as possible because he can play more threatening and aggressive combinations on the left side while he is on the court. The plan of the Bundesliga organizers is that the higher the participation of fans in the event, the greater the possibility of attracting new fans and retaining old fans.
The second innovation, expected goals, is a statistical model that predicts the likelihood of a team scoring from a specific area of the pitch, based on real-time and historical information. In traditional analysis, possession time, shots on target, and total shots are regarded as the leading indicators of a team's level.
But what exactly separates these indicators? Obviously, scoring from a long-range shot outside the penalty area is harder than nudging the ball in from close to the goal. The expected goals model analyzes such data and assigns each scoring opportunity an xG score out of 1. By aggregating this data, each team and broadcaster can analyze the run of play before, during, and after the game.
In the UK, the expected target model has been enthusiastically supported by many journalists who recognize the value of data. However, many conservative voices pointed out that the emergence of such indicators means that football is abandoning traditional analytical ideas and falling into the quagmire of excessive analysis. Despite some objections, in the past few seasons, the broadcast program will release xG statistics at the end of each game and has been widely recognized by the audience.
Make expectations a reality
In the Bundesliga, the expected goals model is also used to generate a "goal probability" percentage. Amazon's SageMaker platform hosts a variety of machine learning models that compute xG results in real time from data such as the player's position, distance from goal, angle to goal, current running speed, and the positions of the goalkeeper and defending players.
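As a toy illustration of how such a model turns those inputs into a probability between 0 and 1, here is a hand-rolled logistic scorer. The features follow the list above, but the coefficients are invented for demonstration and bear no relation to the real Bundesliga/AWS model, which is trained on millions of historical data points.

```java
public class ExpectedGoals {
    // Illustrative coefficients only; a real model learns these from data.
    static double xG(double distanceMeters, double angleDegrees, int defendersInPath) {
        double z = 1.2 - 0.10 * distanceMeters + 0.02 * angleDegrees - 0.45 * defendersInPath;
        return 1.0 / (1.0 + Math.exp(-z)); // logistic link keeps the score in (0, 1)
    }

    public static void main(String[] args) {
        // A close-range shot with an open angle vs. a long shot through traffic.
        System.out.printf("tap-in:    %.2f%n", xG(6, 40, 0));   // ~0.80
        System.out.printf("long shot: %.2f%n", xG(28, 15, 3));  // ~0.07
    }
}
```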
The audience can also see the specific contribution of certain actions (such as a pass) to a successful goal. This deepens fans' understanding of football and gives those who already know the rules something more to appreciate.
Codeless / low-code programming started gaining market currency around 2013-14. No code / low code is a way of creating applications that lets developers build quickly with minimal coding knowledge: they assemble and configure applications through a graphical interface and visual modeling. The developer skips the middle layers entirely; the visual code blocks already contain 90% of the functionality most applications need, so the developer focuses only on the remaining 10% of code logic.
As a result, some developers inevitably have a new sense of crisis: With the advent of codeless / low-code programming, are programmers going to lose their jobs? So what are we talking about when we talk about codeless / low-code programming?
At the beginning, everyone may think that the low-code development platform is similar to an IDE and integrates some tools to improve research and development efficiency. In fact, low-code platforms provide more capabilities than IDEs. Low-code development turns programming into “building blocks” and modularizes common code. Developers can drag and drop through graphical interfaces to complete application development.
This saves developers the time of writing code by hand while keeping application construction flexible, so they can complete application development with very little code. Low-code platforms not only bring software development into other fields but also let enterprises in other fields take up software development, accelerating their digital transformation.
Low code has the following advantages:
First, they shorten the path from requirement to application. Developers can build for multiple platforms at once, and demos can be completed within days or even hours, saving development cost.
Second, reduce the complexity of research and development, and reduce the difficulty of building large-scale systems. The low-code platform framework itself handles a certain complexity, and has built-in security processes, data integration, and support for cross-platform, reducing the need for developers to manually write code repeatedly. Developers can focus on the implementation of key business logic.
Third, low-code platforms integrate mainstream architectures, which helps with rapid deployment, and they also support secondary development and configuration-driven customization.
As early as 1982, James Martin proposed the idea of building applications without writing programs in his book Application Development Without Programmers. Today, many IT companies are competing for the low-code market and making that vision possible: in China, vendors such as Jinyun (a strategic investment), H3 BPM (launched by Orzhe in 2010), and Nine Chapters Fully Collaborative Cloud; abroad, companies such as Google App Maker, Microsoft's Power Platform, Mendix, and Salesforce have all staked out positions in the low-code market.
According to a Forrester Research report, the low-code development platform market was set to reach $15.5 billion by 2020, showing how hot this market is. In another Forrester report, about 100 vendors are contesting the market, with Microsoft ranked first in the "who is your low-code vendor" statistics for both 2018 and 2019.
Every stage of the communications revolution of the past 50 years has been driven by advances in technology, whether the internet, the cloud, networks, or the web. The same applies to the technological revolution now under way in the form of 5G rollouts and 5G testing around the world. Modern technological advances such as the Internet of Things, consumerization and digitalization, and artificial intelligence (AI) all rely on fast, low-cost, reliable networks.
Every day, more and more devices are actively communicating with each other, and simple upgrades to the current network are no longer sufficient to support the growing traffic and sophistication. 4G networks have reached the technical limit of how much data they can transmit, especially given the consumption boom in internet-based services such as gaming, video, and remote working.
This is where 5G comes into the picture. 5G offers solutions to these market needs and a range of new opportunities for telecom, gaming, network, software, SaaS, and IT infrastructure companies, but the transition will be expensive. Telecom operators that choose to build new small or large communication cells face sharply higher network costs for 5G rollouts or upgrades, while operators that fail to move to 5G risk being cannibalized by more successful players.
Network sharing has become a standard part of the telecom operator’s operating model, and the huge expansion of the network infrastructure required to successfully launch 5G will accelerate this trend. There are already different ways to implement network sharing, and each method has its own impact on the business model of the operator.
What is 5G? 5G is the successor to 4G. 5G uses higher radio frequencies and provides higher speeds, lower latency, and finer coverage.
One advantage of using high-frequency radio waves is that more equipment can be used in the same area. Although 4G can support up to 4,000 devices per square kilometer, 5G can support up to 400,000 devices. This opens up a whole new world for the Internet of Things, where device density in some regions can be very high.
One disadvantage of using high-frequency radio waves is that they cannot travel as far as the low-frequency radio waves used by 4G. Therefore, 5G networks introduce a new technology called mMIMO (Massive Multiple Input Multiple Output), which uses a large number of antennas. In addition to supporting the required additional antennas, mMIMO also provides the ability to send and receive multiple data signals at once. This makes it possible to use target data streams to follow users and track them. Therefore, mMIMO can serve more users and devices at the same time in a smaller area, while maintaining fast data rates and consistent performance.
Unlike previous generations of mobile networks, 5G will not require a single operator to provide nationwide coverage over its own infrastructure alone. In fact, network sharing is ideal for 5G, and telcos are pushing for network sharing and software-defined networking as "native" components of the solution. Although the move from "private" to "shared" network infrastructure will require operators to adopt a new way of working, it will also speed up 5G deployment, reduce rollout costs, and cut visual pollution by reducing the number of outdoor antennas required.
5G will help various industries improve their services and cater to the needs of customers and citizens in better ways:
1) Government: with better use of sensors, networks, and interconnected devices in an intelligent Internet of Things, governments will be able to control and direct traffic and public transportation more effectively.
2) Automotive: with the advent of autonomous and semi-autonomous vehicles, low-latency onboard computers will be better able to communicate their current state, and their sensors will be able to detect and process nearby obstructions, traffic lights, vehicles, and so on far more effectively for decision-making.
3) Retail: one of the biggest issues facing retail and consumer goods companies has long been real-time availability of inventory information for replenishment, especially during natural or man-made disasters, which are growing more frequent. For the first time in the history of networks and analytics, 5G makes this feasible. When we say countries like China are truly moving toward a cashless society, the reason is that Chinese consumers can make even the smallest transactions digitally; cashless checkout is the norm in Chinese retail. 5G will make such services cheaper and more cost-effective.
4) Manufacturing: one thing any technology practitioner will tell you about manufacturing in the coming decades is that factory floors, warehouses, and supply chains will be completely transformed by the convergence of 5G, AI, IoT, and robotics. 5G will help manufacturers control and analyze industrial processes with an unprecedented degree of precision and accuracy.
To sum up
After 20 years of continuous development and evolution, the network has become the core technology of our continuously connected world. The availability of standards-compliant, compatible networks has changed the way people expect to communicate with each other and the way information is shared today. And 5G enables the network to keep up with the future needs and expectations of consumers and the industry.
The investment required to deploy 5G is considerably high. To maximize the return on investment, a clear vision and a clear strategy are required. In addition, the key to success is cooperation between all stakeholders and the transition to network sharing. And these are what we must prepare immediately.
In this article, we will discuss whether edge computing will kill cloud computing. In addition, you will learn the advantages and disadvantages of each technology. Let’s delve into the future of edge computing and cloud computing to discover their perspectives and their impact on the IT industry.
Implementing edge computing can solve some key current issues. Transferring data from an edge device to a server for processing takes time, and although the delay may not seem severe, every millisecond matters. There is also pressure on bandwidth to relieve: heavy traffic and long distances can significantly reduce network speed. Cloud computing, as it stands, cannot solve these problems.
This is why edge computing is often said to have advantages over cloud computing. First, it can process time-sensitive data: the technology moves critical data processing to the edge of the network, reducing processing delays. This makes edge computing ideal for applications that require an immediate response, leaving them more robust and faster to load.
In addition, edge computing may be more popular than cloud computing because it ensures lower management costs. Sending the most important data over short distances can save a lot of network and computing resources.
Similarly, if various smart devices use edge computing instead of cloud computing, there may be many benefits. The technology will guarantee an immediate response and provide the opportunity to perform accurate, fast calculations. This feature is particularly useful for the development of autonomous vehicles.
Moreover, this technology offers new opportunities for all large content providers such as Netflix or Amazon. These companies are interested in the development of streaming technology, and edge computing has become the preferred solution. This method allows users to access their favorite programs faster. In addition, these companies have the opportunity to expand their services without sacrificing current performance.
Unlike edge computing, if you use cloud computing, all data will be processed and stored in a remote data center or server. Any device or application that needs to access this information must be connected to the cloud.
Still, this technology can bring many benefits to companies using standard server networks. With the use of cloud computing, such organizations can ensure that only authorized users can access stored data.
According to Cisco’s global research, both humans and machines are expected to generate about 800 ZB of information by 2020. There is no doubt that only cloud servers can store such a large amount of data.
In addition, research shows that Internet of Things (IoT) devices in particular will produce more than 80 megabytes of data. Unfortunately, much of this information will be deleted because the devices do not have enough storage space to hold the data they receive.
Although the main drawback of cloud computing is its lack of speed, the technology provides amazing processing power and storage space. These features will undoubtedly continue to be useful for small business owners and those who do not perform time-critical tasks.
Keep in mind that it may sometimes be necessary to use both techniques for better results. The combination of edge computing and cloud computing can provide you with the opportunity to maximize its potential while reducing its disadvantages.
Following this approach will open up new horizons especially for IT and AI companies. IoT devices will be able to run faster and process data efficiently without losing storage capacity and processing power. Of course, edge computing now seems to have more advantages than cloud computing, but you should not underestimate the advantages of the latter.
Today, the future of the network seems to be somewhere between edge computing and cloud computing. There is no doubt that leading companies will find a way to get more out of these technologies and overcome their disadvantages. It will open up possibilities for improving many services and driving innovation.
At the same time, some users predict the end of cloud computing. However, many experts believe in the future of this technology. Also, for now, no analytics framework can prove that cloud computing has become less popular.
Although edge computing provides solutions to many challenges, cloud computing remains an important part of the IT industry. Today, these two technologies are still important and provide data analysis solutions for a variety of organizations.
Edge computing and cloud computing are different technologies; neither can simply replace the other. Each matters to the IT industry and enables different methods for achieving greater returns. Still, many companies today are turning to edge technology because it offers opportunities that cloud computing alone does not.
The decision on one of these technologies depends on the company’s goals and needs. Try to find out if edge computing can help you overcome existing technical challenges and make sure it can serve your business.
Remember that technology is constantly changing and companies must adapt to new trends. However, you should always choose only those solutions that help improve your business. In addition, you can always choose to use both edge and cloud computing to achieve your company’s goals.