A SWOT analysis is a tool used to evaluate a company’s strengths, weaknesses, opportunities, and threats. Here is a SWOT analysis of Apple:

Overall, Apple’s strengths include its strong brand recognition, wide range of products and services, strong financial position, and innovation. However, the company also faces significant challenges, such as high product prices, dependence on key products, and limited market share in certain segments. Apple’s opportunities include expansion into new markets, development of new products and services, and strategic partnerships and acquisitions. The company’s threats include intense competition, changes in consumer preferences, economic downturns, and government regulations and legal issues.
Apple is a company known for its innovative products and design. The company has been able to consistently release successful and popular products, such as the iPhone and iPad, through its various innovation strategies.
One of the key strategies that Apple employs is a focus on design and user experience. The company places a strong emphasis on creating products that are not only functional but also aesthetically pleasing. This is evident in the sleek and minimalist design of the iPhone and iPad, which have become iconic symbols of Apple’s brand.
Another important aspect of Apple’s innovation strategy is its use of cutting-edge technology. Apple is always on the lookout for new and emerging technologies that it can incorporate into its products. For example, the company was one of the first to adopt OLED displays in its iPhones, which improved the overall visual quality of the device. Additionally, the company has implemented features such as facial recognition and augmented reality in its products, further differentiating it from its competitors.
Apple is also known for its efforts to enter new markets and product categories. The company has been successful in expanding its product line to include new offerings, such as the Apple Watch and the HomePod. These new products have allowed the company to tap into new revenue streams and reach new customer segments.
In addition to these strategies, Apple also places a strong emphasis on secrecy and maintaining control over its supply chain. The company is known for keeping its product development processes and plans tightly under wraps, which helps to maintain an element of surprise and exclusivity around its product releases. This also allows Apple to control the production and distribution of its products, which helps to ensure a consistent level of quality across all of its offerings.
Overall, Apple’s innovation strategies have been key to the company’s success. By focusing on design, utilizing cutting-edge technology, and expanding into new markets and product categories, Apple has been able to create products that are highly desirable and sought-after by consumers. Additionally, the company’s emphasis on secrecy and supply chain control has helped to maintain its competitive edge and position as a leader in the tech industry.
However, it’s important to note that Apple’s innovation strategy is not without criticism. Some critics have pointed out that the company’s focus on secrecy can be detrimental to its relationship with developers and other partners. Additionally, the company’s control over its supply chain has raised concerns about labor practices and human rights.
In conclusion, Apple’s innovation strategies have been key to the company’s success in the tech industry. The company’s focus on design and user experience, use of cutting-edge technology, and efforts to expand into new markets and product categories have helped it to create highly desirable products that are sought-after by consumers. Additionally, the company’s emphasis on secrecy and supply chain control has helped to maintain its competitive edge. However, it’s important to consider the criticism that these strategies have generated.
Apple Glass is a rumored product that is expected to be a pair of augmented reality (AR) glasses developed by Apple. The product is expected to be a blend of virtual and augmented reality, allowing users to interact with digital content in the real world.
According to various reports, Apple Glass is expected to feature advanced technology such as voice recognition and gesture control, allowing for hands-free navigation of the device. It is also expected to have a built-in camera and microphone, allowing for video recording and calling. The device will be connected to the internet, and it’s expected that users will be able to access the internet and their apps through the glasses.
Apple Glass is expected to be integrated with Apple’s ecosystem of products and services, such as the iPhone, iPad, and Apple Watch, allowing for seamless integration and cross-device functionality. The device is also expected to connect to other smart devices such as home appliances, cars, and wearables.
The device is expected to be focused on both consumer and enterprise use cases, providing a wide range of features and capabilities that can be used by various industries such as healthcare, education, and manufacturing. The device could also be used in areas such as logistics, retail, and field services, providing a new way to interact with digital content and data.
One of the main advantages of Apple Glass is the ability to provide information and data in real-time, which would be beneficial for professionals in various fields, such as doctors and mechanics. This could also be beneficial for consumers by providing them with information and notifications on the go, without having to take out their phones.
In conclusion, Apple Glass is a highly anticipated product that is expected to bring new possibilities to the field of augmented reality. The device is expected to feature advanced technology, seamless integration with other Apple products and services, and a wide range of features and capabilities that can be used by various industries. It could change the way we interact with digital content and data, creating new opportunities for both consumers and businesses. However, it’s important to note that Apple has not officially announced this product, and the specifications and features described here are based on rumors and speculation.
Apple Silicon is a term used to describe the custom-designed processors that Apple uses in its computers and mobile devices. These processors are based on the ARM architecture, which is different from the x86 architecture used by most personal computers.
The use of Apple Silicon allows Apple to have more control over the performance and power efficiency of its devices, as well as enabling new features and capabilities. The company began transitioning to Apple Silicon in 2020 with the release of the M1 chip, which is used in the MacBook Air, MacBook Pro, and Mac Mini.
One of the main advantages of Apple Silicon is its performance and power efficiency, which allows for longer battery life and faster performance. This also allows Apple to create smaller and lighter devices, as well as reduce the need for fans and cooling systems. Additionally, Apple Silicon also allows for more seamless integration of hardware and software, which can improve the overall user experience.
Another advantage of Apple Silicon is its support for iOS apps, which allows users to run iPhone and iPad apps on their Macs. This allows for a wider range of apps to be available on Macs and can make it easier for developers to create apps for both iOS and macOS.
In conclusion, Apple Silicon is a family of custom-designed processors based on the ARM architecture that Apple uses in its computers and mobile devices. Apple Silicon allows for better performance, power efficiency, and integration of hardware and software, as well as support for iOS apps on Macs.
Apple HomeKit is a framework that allows developers to create apps and devices that can be controlled by iOS devices such as the iPhone and iPad. It allows users to control and automate a wide range of home devices such as lights, thermostats, cameras, door locks, and more through the Home app or by using Siri voice commands.
HomeKit uses a secure communication protocol that encrypts data between devices and requires users to set up a unique HomeKit code or use Touch ID or Face ID to grant access to their home. This ensures that only authorized users can access and control the devices in their homes.
AirPlay is a wireless protocol that allows users to stream audio, video, and photos from their iOS devices, Macs, and Apple TVs to other AirPlay-enabled devices such as speakers and TVs. This allows users to play music, watch movies, and view photos on other devices without the need for cables or physical connections.
AirPlay uses a peer-to-peer connection between devices, which means that the devices can communicate directly with each other without the need for a central hub or router. This allows for a faster and more stable connection and reduces the amount of data that needs to be sent over the internet. The protocol also uses encryption to ensure that the data being transmitted is secure.
In conclusion, Apple HomeKit is a framework that allows developers to create apps and devices that can be controlled by iOS devices and AirPlay is a wireless protocol that allows users to stream audio, video, and photos from their iOS devices, Macs, and Apple TVs to other AirPlay-enabled devices. Both of these technologies are designed to make it easy for users to control and interact with their devices, and both use encryption to ensure that the data being transmitted is secure.
First, an introduction to microservices
1. What are Microservices
To introduce microservices, we must first understand what a microservice is. As the name suggests, it has to be understood from two aspects: what is “micro”, and what is “service”? In the narrow sense, the well-known “two-pizza team” is a good illustration of “micro” (the two-pizza team was first proposed by Amazon CEO Jeff Bezos, meaning that for a single service, everyone involved, from design through development, testing, and operations, should add up to no more than two pizzas’ worth of people). As for “service”, it must be distinguished from a system: a service is one relatively small, independent functional unit, the smallest set of functions that a user can perceive.
2. Origin of Microservices
First proposed by Martin Fowler and James Lewis in 2014, the microservices architectural style is a way to develop a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms, usually HTTP APIs. The services are built around business capabilities, can be deployed independently through automated deployment machinery, may be implemented in different programming languages and use different data storage technologies, and require only a bare minimum of centralized management.
3. Why do you need microservices?
In the traditional IT industry, most software is a pile of independently built systems whose problems can be summed up as poor scalability, low reliability, and high maintenance costs. SOA tried to address this, but its early bus pattern was strongly bound to a particular technology stack (for example, J2EE). As a result, many enterprises found their legacy systems hard to connect, switchover took too long, costs were too high, and it also took time for a new system to stabilize. In the end, SOA looks beautiful, but it became an enterprise-grade luxury that small and medium-sized companies could only fear.
3.1 Problems caused by the monolithic architecture
A monolithic architecture works well at a relatively small scale, but as the system grows it exposes more and more problems, mainly the following:
1. Complexity keeps growing
For example, some projects have hundreds of thousands of lines of code, the boundaries between modules blur, and the logic becomes confused. The more complex the code, the harder it is to solve the problems you run into.
2. Technical debt keeps rising
Staff turnover is normal, but some employees neglect code quality before leaving and leave behind many defects. Because the codebase is huge, each defect is hard to find, which causes great trouble for new employees; the greater the turnover, the more defects are left behind. This is what we mean by technical debt piling up.
3. Deployment gets slower and slower
This is easy to understand: the modules of a monolith are large and the codebase is huge, so deploying the project takes more and more time. Once a project takes 10 minutes to start, starting a few projects eats up most of a day and leaves developers very little time to actually develop.
4. It hinders technological innovation
For example, suppose a previous project was written with Struts 2. Because the modules are inextricably linked, the codebase is large, and the logic is not clear enough, refactoring the project onto Spring MVC would be very difficult and very costly, so companies often have to bite the bullet and keep using the old Struts architecture. This hinders technological innovation.
5. It cannot scale on demand
For example, suppose the movie module is CPU-intensive while the order module is IO-intensive. If we want to improve the performance of the order module, say by adding memory or disks, we have to consider the other modules too, because all modules live in one deployment: we cannot scale one module without affecting the others, and thus cannot scale on demand.
3.2 Differences between Microservices and Monolithic Architectures
Each microservice module is equivalent to a separate project. The amount of code in each is significantly smaller, and problems are relatively easy to solve.
In a monolithic architecture, all modules share one database and the storage model is uniform. With microservices, each module can use a different storage technology (for example, some use Redis, others MySQL), and each module has its own database.
In a monolithic architecture, all modules are developed with the same technology. With microservices, each module can be built with a different technology, so the development model is more flexible.
3.3 Microservices and SOA Differences
Microservices are, in essence, an SOA architecture. In a microservice system there can be services written in Java alongside services written in Python, unified into one system through a RESTful architectural style. Microservices are therefore independent of any particular technology, and highly scalable.
4. The Nature of Microservices
The key to microservices is not just the services themselves: the system must provide a foundational architecture that allows microservices to be deployed, run, and upgraded independently. Beyond that, the architecture must let microservices remain structurally “loosely coupled” while functioning as a unified whole. This “unified whole” means a unified interface style, unified permission management, a unified security policy, a unified release process, unified logging and auditing, unified scheduling, a unified access entry point, and so on.
The purpose of microservices is to effectively split applications for agile development and deployment.
Microservices promote the idea that teams should inter-operate, not integrate. To inter-operate is to define the boundaries and interfaces of the system. With full-stack, autonomous teams, communication costs are kept inside the team, each subsystem becomes more cohesive, coupling between dependencies weakens, and cross-system communication costs fall.
5. What kind of project is suitable for microservices
Whether a system suits microservices depends on the independence of its business functions. If the system provides very low-level services, such as an operating system kernel, a storage system, a network system, or a database, the functions are tightly interrelated; forcing them into smaller service units would make the integration workload rise sharply, and such artificial cutting brings no real isolation at the business level. The pieces could not be deployed and run independently, so they are not suited to microservices.
Whether you can make a microservice depends on four elements:
6. Microservice Splitting and Design
Moving from a monolith to a microservice architecture, you continually run into the problem of drawing service boundaries. For example, we have a user service that provides basic user information; should the user’s avatar and pictures be split into a new service, or merged into the user service? If service granularity is too coarse, we are back on the old monolithic road; if it is too fine, the overhead of inter-service calls becomes non-negligible and the difficulty of management rises sharply. So far there is no standard for drawing service boundaries; it can only be adjusted for each business system.
The big principle of splitting is: when a business function does not depend on, or rarely depends on, other services, has independent business semantics, and provides data to more than two other services or clients, it should be split out as a separate service module.

6.1 Microservice Design Principles
Principle of Single Responsibility
This means that each microservice only needs to implement its own business logic. The order management module, for example, only needs to handle the business logic of orders and nothing else.
Principle of Service Autonomy
This means that each microservice is independent in development, testing, operations, and maintenance, including having its own database. It has a complete life cycle of its own and can be treated as a standalone project that does not have to rely on other modules.
Principle of Lightweight Communication
First, the communication protocol should be lightweight; second, communication should be cross-language and cross-platform. Being cross-language keeps each microservice sufficiently independent, not held hostage by any one technology.
Principle of Clear Interfaces
Since there may be invocation relationships between microservices, and to avoid future rework caused by changes to a microservice’s interface, all situations should be taken into account at design time so that interfaces are as general and flexible as possible, sparing other modules from having to adjust as well.
Each microservice can run independently in its own process;
A series of independently running microservices work together to build the entire system;
Each service is developed around a separate business capability, and a microservice generally fulfils one specific function, such as order management or user management;
Microservices communicate through lightweight mechanisms, such as calls via REST APIs or RPC.
7. Advantages and disadvantages of microservices
Easy to develop and maintain
Since a single microservice module is equivalent to one project, developing that module only requires caring about its own logic; the amount of code and the logical complexity are both reduced, making it easy to develop and maintain.
Faster start-up
This is relative to the monolith: starting the service of a single module is obviously much faster than starting an entire monolithic project.
Local modifications are easy to deploy
Suppose we find a problem during development. With a monolithic architecture we need to re-release and restart the whole project, which is very time-consuming. Microservices are different: whichever module has the bug, we only need to fix that module’s bug and restart that module’s service. Deployment is relatively simple, and not having to restart the whole project saves time.
The technology stack is not limited
For example, suppose the order microservice and the movie microservice were originally written in Java, and we now want to rewrite the movie microservice with Node.js. That is entirely possible, and because the focus is only on movie logic, the cost of switching technology is much smaller.
Scaling on demand
We said above that with a monolithic architecture, when you want to scale one module you have to consider whether the performance of other modules will be affected. For microservices, this is not a problem at all.
High operation and maintenance requirements
With a monolith we only need to maintain one project, but a microservice architecture is composed of many services, and a problem in any module can make the whole system run abnormally. It is often not easy to know which module caused the problem, because we cannot trace it step by step in a debugger, and this places high demands on operations staff.
Distributed Complexity
With a monolith we can get by without distributed computing, but in a microservice architecture distribution is almost a required technology, and the inherent complexity of distributed systems makes the microservice architecture complex as well.
High cost of interface adjustment
For example, suppose the user microservice is called by both the order and movie microservices. Once the user microservice’s interface changes significantly, all the microservices that depend on it must be adjusted accordingly. Since there may be very many microservices, the cost of adjusting an interface rises markedly.
Repetitive work
With a monolith, if a piece of logic is used by multiple modules, we can abstract it into a utility class that all modules call directly. Microservices cannot do this, because one microservice’s utility class cannot be called directly by other microservices, so we have to build the same utility in each microservice, resulting in duplicated code.
8. Microservices Development Framework
At present, the four most commonly used microservice development frameworks are:
Spring Cloud: http://projects.spring.io/spring-cloud (a very popular microservice architecture today)
Dubbo: http://dubbo.io
Dropwizard: http://www.dropwizard.io (focused on developing individual microservices)
Consul, etcd, etc. (modules for microservices)
9. The difference between Spring Cloud and Spring Boot
Spring Boot:
Designed to simplify the creation of production-grade Spring applications and services. It simplifies configuration, uses an embedded web server, includes many out-of-the-box microservice capabilities, and can be deployed in combination with Spring Cloud.
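To make that concrete, here is a minimal sketch of a single microservice built with Spring Boot, assuming spring-boot-starter-web is on the classpath; the OrderServiceApplication and OrderController names and the /orders/{id} route are invented for illustration.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// A whole microservice in one file: embedded web server, no external container.
@SpringBootApplication
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    // The service's REST API; other microservices call this over HTTP.
    @RestController
    static class OrderController {

        @GetMapping("/orders/{id}")
        public String getOrder(@PathVariable String id) {
            // Real logic would query this service's own database.
            return "{\"orderId\": \"" + id + "\", \"status\": \"PAID\"}";
        }
    }
}
```

Running the main method starts the embedded web server directly; there is no separate application server or lengthy XML configuration, which is exactly the simplification described above.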
Spring Cloud:
A microservice toolkit that provides developers with kits for distributed system configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, and so on.
Second, microservices in practice
1. How do clients access the microservices? (API Gateway)
In the traditional development model, all services are local and the UI can call them directly. Now the system is split into independent services by function, each running in a separate Java process, generally on its own virtual machine. How does the client UI access them? If there are N services in the background and the front end has to know about and manage all N, then whenever a service goes offline, is updated, or is upgraded, the front end must be redeployed, which obviously defeats the purpose of the split, especially when the front end is a mobile application, where the pace of business change is usually faster. In addition, N small service calls add up to no small network overhead. Finally, the microservices inside the system are usually stateless, while user login information and permission management are best maintained and managed in one unified place (OAuth).
Therefore, between the N backend services and the UI there is generally a proxy, also called an API Gateway, whose roles include:
Providing a unified service entry point, so the microservices are transparent to the front end
Aggregating backend services to save traffic and improve performance
Providing security, filtering, flow control, and other API management functions
In practice, this API Gateway can take many forms: a hardware or software appliance, a simple MVC framework, or even a Node.js server. Its most important role is to aggregate backend services for the front end (usually a mobile application), provide a unified service exit, and decouple the two sides. Beware, though, that the API Gateway can also become a single point of failure or a performance bottleneck.
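As one concrete form such a gateway might take, here is a sketch of the routing layer using Spring Cloud Gateway's Java DSL, assuming spring-cloud-starter-gateway is on the classpath; the route paths and the order-service/user-service addresses are placeholders, not from the original article.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // One entry point for the UI; the gateway fans requests out to backend services.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // /api/orders/** is forwarded to the order microservice.
                .route("orders", r -> r.path("/api/orders/**")
                        .filters(f -> f.stripPrefix(1)) // drop the /api prefix
                        .uri("http://order-service:8080"))
                // /api/users/** is forwarded to the user microservice.
                .route("users", r -> r.path("/api/users/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://user-service:8080"))
                .build();
    }
}
```

The front end only ever talks to the gateway, so a backend service can be moved, upgraded, or replaced without redeploying the front end, which is the decoupling described above.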
2. How do microservices communicate? (Service calls)
Because all microservices are independent Java processes running on separate virtual machines, communication between services is inter-process communication (IPC), and there are many mature solutions for it. The approaches below are basically the most common; each could be expanded into a book of its own, we are generally familiar with the details, and we will not elaborate here.
REST (JAX-RS, Spring Boot)
RPC (Thrift, Dubbo)
Asynchronous messaging (Kafka, Notify)
Synchronous calls are generally simple and strongly consistent, but they are prone to call-chain problems and the perceived performance is worse, especially when the call chain is deep. The comparison between REST and RPC is also a very interesting topic. REST, being based on HTTP, is easier to implement and easier to accept; the server-side implementation technology is flexible, every language supports it, and there are no special requirements on the client beyond an HTTP SDK, so it is used relatively widely. RPC has its own advantages: the transport protocol is more efficient, and it is more secure and controllable. Especially within one company, if there is a unified development standard and a unified service framework, its development-efficiency advantages are more obvious. Look at your own technical accumulation and actual conditions, and choose for yourself.
Asynchronous messaging is used especially widely in distributed systems. It reduces coupling between calling services and acts as a buffer between calls, ensuring that a backlog of messages does not flush away the callee while the caller keeps a good service experience and continues its own work without being dragged down by backend performance. The cost is weaker consistency: you must accept eventual consistency of data; backend services generally have to be idempotent, because for performance reasons messages are usually delivered more than once (guaranteeing a message is received exactly once is a great test of performance); and you must introduce an independent broker, whose distributed management is a great challenge if the company has no accumulated expertise with it.
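As a minimal sketch of the asynchronous style, here is a producer using the official Kafka Java client (org.apache.kafka:kafka-clients); the broker address, topic name, and payload are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker address: placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for replication before the send counts as done

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The caller publishes the event and moves on; consumers process it
            // at their own pace, which is the buffering effect described above.
            // Keying by order id keeps all events for one order on one partition.
            producer.send(new ProducerRecord<>("order-created", "order-42",
                    "{\"orderId\":\"order-42\",\"amount\":99.0}"));
            // Because delivery may happen more than once, the consumer side must be
            // idempotent, e.g. skip an order id it has already processed.
        }
    }
}
```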
3. How do you find so many services? (Service Discovery)
In a microservice architecture, each service generally has multiple replicas for load balancing. A service may go offline at any time, or new service nodes may be added under temporary traffic pressure. How do services become aware of each other? How are services managed? This is the service discovery problem. There are generally two kinds of practice, each with advantages and disadvantages. Both basically use ZooKeeper or a similar technology to manage service registration information in a distributed way. When a service comes online, the provider registers its service information with ZooKeeper (or a similar framework), maintains a long-lived connection through heartbeats, and updates its link information in real time. A service caller looks the service up through ZooKeeper, finds one according to a customizable algorithm, and can also cache the service information locally to improve performance. When a service goes offline, ZooKeeper sends a notification to the service’s clients.
Client-side discovery: the advantages are a simple architecture, flexible extension, and dependence only on the service registry. The disadvantage is that the client has to maintain the addresses of all the services it calls, which is technically difficult; large companies generally have mature internal frameworks to support this, such as Dubbo.
Server-side discovery: the advantage is simplicity; all services are transparent to the front-end caller. Applications deployed on cloud services at smaller companies generally use this approach more.
4. What if a service goes down?
The defining feature of distributed systems is that the network is unreliable. Splitting into microservices reduces this risk, but without special guarantees the outcome is definitely a nightmare. We recently hit an online failure caused by a very humble SQL counting query: as traffic grew, it drove database load up, hurting the performance of the application and thereby of all the front-end applications calling that application’s service. So when our system is composed of a chain of service calls, we must ensure that a problem in any link does not affect the overall chain.
There are many corresponding means:
Downgrading (falling back to a local cache) and similar methods are basically well understood and generic, so we will not detail them.
For example, Netflix’s Hystrix: https://github.com/Netflix/Hystrix
5. Issues to consider for Microservices
The issues to consider in a microservice architecture can be summarized as follows, including:
API Gateway
Inter-service calls
Service Discovery
Service Fault Tolerance
Service Deployment
Data calls
Third, important components of microservices
1. Microservices Basic Capabilities
2. Service Registry
Services need a service discovery mechanism to help them perceive each other’s existence. When a service starts, it registers its own service information with the registry and subscribes to the services it needs to consume.
The service registry is the core of service discovery. It holds the network address (IP address and port) of each available service instance, and it must be highly available and updated in real time. The Netflix Eureka mentioned above is a service registry. It provides a REST API for registering services and querying service information: a service registers its IP address and port with a POST request, refreshes its registration with a PUT request every 30 seconds, logs off with a DELETE request, and clients obtain the available service instances with a GET request (a sketch of these calls follows the list below). Netflix achieves high availability by running multiple instances on Amazon EC2, with each Eureka server having an elastic IP address. When a Eureka server starts, DNS servers are allocated dynamically; a Eureka client obtains Eureka’s network address (IP address and port) by querying DNS, and in general it is given a Eureka server in its own availability zone. Other systems that can act as a service registry include:
etcd — a highly available, distributed, strongly consistent key-value store; both Kubernetes and Cloud Foundry use etcd.
Consul — a tool for discovering and configuring services. It provides an API that allows clients to register and discover services, and it can run health checks to determine service availability.
ZooKeeper — a high-performance coordination service widely used in distributed applications. Apache ZooKeeper was originally a subproject of Hadoop but is now a top-level project.
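To illustrate the Eureka REST cycle described above, here is a sketch using Java 11's built-in HttpClient. The server address, application name, and instance id are invented, and the exact endpoint paths and payload schema should be checked against the Eureka documentation before relying on them.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EurekaRestSketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:8761/eureka/v2"; // Eureka server: placeholder

        // Heartbeat: renew the lease of one instance (sent every 30 seconds).
        HttpRequest renew = HttpRequest.newBuilder()
                .uri(URI.create(base + "/apps/ORDER-SERVICE/host-1"))
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();
        int status = client.send(renew, HttpResponse.BodyHandlers.discarding()).statusCode();
        System.out.println("renew -> " + status);

        // Query: fetch all registered instances of an application.
        HttpRequest query = HttpRequest.newBuilder()
                .uri(URI.create(base + "/apps/ORDER-SERVICE"))
                .header("Accept", "application/json")
                .GET()
                .build();
        System.out.println(client.send(query, HttpResponse.BodyHandlers.ofString()).body());

        // Registration would be a POST to /apps/ORDER-SERVICE with an instance JSON
        // body, and de-registration a DELETE to /apps/ORDER-SERVICE/host-1.
    }
}
```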
2.1 ZooKeeper service registration and discovery
In simple terms, ZooKeeper can act as a service registry: it lets multiple service providers form a cluster and lets service consumers obtain a specific service’s access address (IP + port) from the registry in order to reach a specific provider.
Specifically, ZooKeeper behaves like a distributed file system: whenever a service provider is deployed, it registers its service with ZooKeeper under a path of the form /{service}/{version}/{ip:port}. For example, if our HelloWorldService is deployed on two machines, ZooKeeper creates two records: /HelloWorldService/1.0.0/100.19.20.01:16888 and /HelloWorldService/1.0.0/100.19.20.02:16888.
ZooKeeper provides a “heartbeat detection” function: it periodically sends a request to each service provider (in fact, a long-lived socket connection is established). If a provider does not respond for a long time, the service registry considers it “dead” and culls it. For example, if the machine 100.19.20.02 goes down, the only path left in ZooKeeper would be /HelloWorldService/1.0.0/100.19.20.01:16888.
Service consumers watch the corresponding path (/HelloWorldService/1.0.0). Whenever the data at that path changes (a provider is added or removed), ZooKeeper notifies the consumer that the provider address list has changed, so it can refresh its copy. A minimal sketch of this register-and-watch flow appears below.
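Here is that flow sketched with the plain ZooKeeper Java client (org.apache.zookeeper:zookeeper); the address and paths follow the HelloWorldService example above, and the parent path /HelloWorldService/1.0.0 is assumed to already exist as a persistent node.

```java
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkServiceRegistrySketch {

    public static void main(String[] args) throws Exception {
        // Connect to ZooKeeper (placeholder address); the watcher receives session
        // events and, later, the path-change notifications described above.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000,
                event -> System.out.println("zk event: " + event));

        // Provider side: register this instance as an EPHEMERAL node so that it
        // disappears automatically when the heartbeat (session) is lost.
        String path = "/HelloWorldService/1.0.0/100.19.20.01:16888";
        zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Consumer side: read the current provider list and set a watch; when a
        // provider joins or dies, the watcher fires and the list is re-read.
        List<String> providers = zk.getChildren("/HelloWorldService/1.0.0", true);
        System.out.println("available providers: " + providers);
    }
}
```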
More importantly, ZooKeeper’s built-in fault-tolerance and disaster-recovery capabilities (such as leader election) ensure the high availability of the service registry.
3. Load Balancing
To ensure high availability, each microservice needs multiple deployed instances providing the service. The client then performs load balancing across them.
3.1 Common Strategies for Load Balancing
3.1.1 Random
Requests from the network are randomly distributed among the internal servers.
3.1.2 Polling
Requests from the network are assigned to the internal servers in turn, from 1 to N and then starting over. This algorithm suits groups in which every server has the same configuration and the average request load is fairly even.
3.1.3 Weighted Polling
Each server is assigned a weight according to its processing capacity, so that it receives a share of requests proportional to that weight. For example, if server A’s weight is set to 1, B’s to 3, and C’s to 6, then A, B, and C receive 10%, 30%, and 60% of the requests respectively. This algorithm ensures that high-performance servers get more use and keeps low-performance servers from being overloaded.
3.1.4 IP Hash
This approach hashes the request’s source IP and uses the hash value to find the real server. For a given host the chosen server is therefore always the same, and no source-IP state needs to be stored. Note, however, that this approach may result in an unbalanced server load.
3.1.5 Minimum number of connections
The time the server spends on each client request can vary greatly, and over longer working periods a simple round-robin or random algorithm lets the number of in-flight connections per server drift far apart, which is not true load balancing. The least-connections algorithm keeps, for each server, a record of the number of connections it is currently handling; each new service request is assigned to the server with the fewest connections, so the balance better matches reality. This algorithm suits services whose requests take a long time to process, such as FTP. A self-contained sketch of several of these strategies follows.
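The strategies above are straightforward to express in code. In this sketch the server names, weights, and client IP are made up, and a real balancer would also track server health and live connection counts.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadBalancers {

    private static final AtomicInteger RR = new AtomicInteger();
    private static final AtomicInteger WRR = new AtomicInteger();

    // Random: pick any server.
    static String random(List<String> servers) {
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }

    // Polling (round robin): 1 to N, then start over.
    static String roundRobin(List<String> servers) {
        return servers.get(Math.floorMod(RR.getAndIncrement(), servers.size()));
    }

    // Weighted polling: with weights A=1, B=3, C=6, the servers get 10%/30%/60%.
    static String weighted(List<String> servers, List<Integer> weights) {
        int total = weights.stream().mapToInt(Integer::intValue).sum();
        int slot = Math.floorMod(WRR.getAndIncrement(), total);
        for (int i = 0; i < servers.size(); i++) {
            slot -= weights.get(i);
            if (slot < 0) return servers.get(i);
        }
        throw new IllegalStateException("weights must be positive");
    }

    // IP hash: the same client IP always lands on the same server.
    static String ipHash(List<String> servers, String clientIp) {
        return servers.get(Math.floorMod(clientIp.hashCode(), servers.size()));
    }

    public static void main(String[] args) {
        List<String> servers = List.of("A", "B", "C");
        for (int i = 0; i < 10; i++) System.out.print(weighted(servers, List.of(1, 3, 6)));
        System.out.println(); // one 10-request cycle: ABBBCCCCCC
        System.out.println(ipHash(servers, "100.19.20.1")); // always the same server
    }
}
```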
4. Fault tolerance
Fault tolerance means, as the words suggest, accommodating errors: keeping them from spreading and confining their impact within a fixed boundary. “A dike of a thousand miles collapses because of an ant nest”; the way we practice fault tolerance is to keep the ant nest from growing. Common techniques such as downgrading, rate limiting, circuit breaking, and timeout retries are all fault-tolerance methods.
When calling a service cluster, if a microservice call fails, with a timeout, a connection error, a network error, and so on, fault tolerance is applied according to policy. The currently supported fault-tolerance policies include fast failure and failover. If a call fails several times in a row, the circuit is broken directly and no further calls are initiated, which prevents one failing service from draining everything that depends on it.
4.1 Fault Tolerance Policy
4.1.1 Fast Failure
The service initiates only a single call and reports an error immediately on failure. Typically used for write operations that are not idempotent.
4.1.2 Failover
The service initiates a call and, when a failure occurs, retries another server. Usually used for read operations, although retries bring longer latency. The number of retries can usually be configured.
4.1.3 Fail-safe
When a service call throws an exception, it is simply ignored. Typically used for operations such as writing logs.
4.1.4 Automatic failure recovery
When a service call fails, the failed request is recorded and retransmitted on a schedule. Typically used for message notifications.
4.1.5 Forking cluster
Multiple servers are called in parallel, and the call returns as soon as one succeeds. Usually used for read operations with high real-time demands. The maximum parallelism can be set with forks=n.
4.1.6 Broadcast call
The call is broadcast to all providers one by one, and any single failure fails the whole call. Typically used to notify all providers to update local resources such as caches or logs.
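A compact sketch of the fast-failure, failover, and fail-safe policies above; the server list and the call function are placeholders, and production frameworks such as Dubbo implement these policies with far more care (timeouts, sticky routing, and so on).

```java
import java.util.List;
import java.util.function.Function;

public class FaultTolerancePolicies {

    // Fast failure: one call, and a failure is reported immediately.
    static <T> T failfast(String server, Function<String, T> call) {
        return call.apply(server);
    }

    // Failover: on failure, retry the next replica; suited to idempotent reads.
    static <T> T failover(List<String> servers, Function<String, T> call, int maxRetries) {
        RuntimeException last = null;
        for (int i = 0; i <= maxRetries && i < servers.size(); i++) {
            try {
                return call.apply(servers.get(i));
            } catch (RuntimeException e) {
                last = e; // remember the error and try the next server
            }
        }
        throw last;
    }

    // Fail-safe: exceptions are swallowed; suited to things like log writing.
    static void failsafe(Runnable operation) {
        try {
            operation.run();
        } catch (RuntimeException ignored) {
            // deliberately ignored
        }
    }

    public static void main(String[] args) {
        List<String> servers = List.of("10.0.0.1", "10.0.0.2", "10.0.0.3");
        String result = failover(servers, s -> {
            if (s.endsWith(".1")) throw new RuntimeException("timeout on " + s);
            return "response from " + s;
        }, 2);
        System.out.println(result); // response from 10.0.0.2
    }
}
```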
5. Circuit breaking (fusing)
Circuit breaking can be called a kind of “intelligent fault tolerance”: when calls reach a threshold number of failures or failure ratio, the breaker trips open and the program automatically cuts off the current RPC calls, preventing the error from spreading further. Implementing a circuit breaker mainly means handling three states: closed, open, and half-open; the transitions between them are described below.
When handling exceptions, we must decide how to respond according to the specific business situation. For example, if we call the commodity interface and the other side has only temporarily downgraded itself, then as the gateway caller we should switch to an alternative service or fall back to bottom-line data, and give the user a friendly message. We also need to distinguish exception types: a crashed dependency may take a long time to fix, while a temporarily overloaded server merely causes timeouts; a circuit breaker should be able to recognize such types and adjust its strategy to the specific kind of error. Manual control should be added too: when a failed service’s recovery time is uncertain, an administrator can manually force the breaker’s state. Finally, the circuit breaker’s use case is calling remote services or shared resources that may fail; for local private resources, such as a local cache, using a breaker only adds overhead. Also note that a circuit breaker cannot be used as a substitute for exception handling in your application’s business logic.
Some exceptions are stubborn, sudden, unpredictable, and hard to recover from, and they can also lead to cascading failures (for example, suppose a service cluster is under very high load; if part of the cluster dies at this point while still holding a large share of the resources, the whole cluster may suffer). If we keep retrying in this situation, the result is mostly further failure. So at such times the application should immediately enter a fail-fast state and take appropriate measures to recover.
We can implement a CircuitBreaker with a state machine that has the following three states:
Closed: the circuit breaker is closed by default, allowing operations to execute. The CircuitBreaker internally records the number of recent failures, incrementing it whenever the protected operation fails. If the number of failures (or the failure rate) reaches a threshold within a given period, the CircuitBreaker transitions to the Open state. Upon opening, it starts a timeout timer whose purpose is to give the cluster time to recover from the failure; when the timer expires, the CircuitBreaker switches to the Half-Open state.
Open: in this state, the operation fails immediately and an exception is thrown at once.
Half-Open: in this state, the CircuitBreaker allows a limited number of operations to execute. If they all succeed, it assumes the fault has been fixed, transitions to the Closed state, and resets the failure count. If any operation fails, it assumes the fault persists, switches back to the Open state, and restarts the timer (giving the system some more time to recover). A minimal sketch of this state machine follows.
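In this sketch of the three-state machine, the threshold and timeout values are illustrative, and real implementations such as Hystrix add sliding failure-rate windows, metrics, and fallback logic.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {

    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final Duration openTimeout;
    private State state = State.CLOSED; // closed by default
    private int failures = 0;
    private Instant openedAt;

    CircuitBreaker(int failureThreshold, Duration openTimeout) {
        this.failureThreshold = failureThreshold;
        this.openTimeout = openTimeout;
    }

    synchronized <T> T call(Supplier<T> operation) {
        if (state == State.OPEN) {
            // When the recovery timer expires, allow trial calls (half-open).
            if (Instant.now().isAfter(openedAt.plus(openTimeout))) {
                state = State.HALF_OPEN;
            } else {
                throw new IllegalStateException("circuit open: failing fast");
            }
        }
        try {
            T result = operation.get();
            // Success while half-open (or closed) means the fault is gone: reset.
            state = State.CLOSED;
            failures = 0;
            return result;
        } catch (RuntimeException e) {
            failures++;
            // A failure while half-open, or too many while closed, (re)opens the circuit.
            if (state == State.HALF_OPEN || failures >= failureThreshold) {
                state = State.OPEN;
                openedAt = Instant.now();
            }
            throw e;
        }
    }
}
```

A caller would wrap each remote call as breaker.call(() -> rpc()), catching the failing-fast exception to trigger a downgrade path.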
6. Rate limiting and downgrading
These ensure the stability of core services. As traffic keeps growing, set a threshold on the number of requests the system can handle and reject requests beyond that threshold outright. Meanwhile, to keep core services available, non-core services can be downgraded: traffic is limited by capping a service’s maximum flow, and a single microservice can be manually downgraded from the management console.
7. SLA in Microservices
SLA is short for Service-Level Agreement. It is a contract between a network service provider and a customer that defines terms such as the type of service, the quality of service, and customer payment.
A typical SLA includes the following items:
8. API Gateways
The gateway here is the API gateway: all API calls enter uniformly through the API gateway layer, which provides unified access and output. The basic functions of a gateway are unified access, security protection, protocol adaptation, traffic control, support for long- and short-lived connections, and fault tolerance. With a gateway, each API provider team can focus on its own business logic, while the API gateway focuses on security, traffic, routing, and similar issues.
9. Multi-level caching
The simplest caching pattern is to query the database once, write the result to a cache such as Redis, and set an expiration time. Because entries expire, we should watch the cache penetration rate. For example, if the method queryOrder is called 1000 times per second and the database method queryProductFromDb nested inside it is called 300 times per second, then the Redis penetration rate is 300/1000. When using the cache this way, pay attention to that rate: a high penetration rate means the cache is not doing its job. Another way to use the cache is to persist it, that is, set no expiration time, which raises a data-update problem. There are generally two approaches. One uses a timestamp: queries go to Redis by default; every time data is written, a timestamp is stored with it, and every read compares that timestamp with the current system time; if, say, more than 5 minutes have passed, the database is queried again. This guarantees Redis always has data and generally serves as a fault-tolerance fallback for the DB. The other approach really treats Redis as a DB: subscribe to the database’s binlog and push data into the cache through a data-replication system, making the cache multi-level. You can use an in-JVM cache in the application as the level-1 cache (generally small, and suited to high-frequency access), a Redis cluster as the level-2 remote cache, and an outermost level-3 Redis as the persistent cache. A sketch of the timestamp-based refresh appears below.
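In this sketch of the timestamp approach, a ConcurrentHashMap stands in for Redis to keep the example self-contained, but with Redis the timestamp would be stored next to the value in the same way.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TimestampedCache {

    // A value plus the time it was written; entries never expire, only refresh.
    record Entry(String value, Instant writtenAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>(); // Redis stand-in
    private final Duration maxAge = Duration.ofMinutes(5);

    // Read path: serve from cache by default; if the entry is older than 5 minutes,
    // re-query the DB and overwrite. If the DB is down, the stale entry still
    // serves as bottom-line data, which is the fault-tolerance role described above.
    String get(String key, Function<String, String> dbQuery) {
        Entry e = cache.get(key);
        boolean fresh = e != null
                && Duration.between(e.writtenAt(), Instant.now()).compareTo(maxAge) < 0;
        if (fresh) return e.value();
        try {
            String value = dbQuery.apply(key);
            cache.put(key, new Entry(value, Instant.now()));
            return value;
        } catch (RuntimeException dbDown) {
            if (e != null) return e.value(); // stale, but better than nothing
            throw dbDown;
        }
    }
}
```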
10. Timeouts and retry
Timeouts and retries are also fault-tolerance methods. Wherever RPC calls occur, reading Redis, the DB, MQ, and so on, a network failure or a failed dependency that cannot return a result for a long time will pile up threads, raise CPU load, and can even cause an avalanche. So set a timeout for every RPC call. For resources we depend on strongly, there must also be a retry mechanism, but 1-2 retries are recommended, and with retries the timeout should be reduced accordingly. For example, with 1 retry there are 2 calls in total: if the timeout is configured at 2s, the client may wait 4s for a response; so in retry-plus-timeout mode, the timeout should be shortened (see the sketch below). It is also worth knowing which links consume the time of an RPC call. A normal call’s time comprises: (1) caller-side RPC framework execution time + (2) network transmission time + (3) server-side RPC framework execution time + (4) server-side business code time. Both caller and server have their own performance monitoring; suppose the caller’s tp99 is 500ms while the server’s tp99 is 100ms, and colleagues in the network group confirm the network is fine. Where did the time go? There are two typical reasons: time spent on the client caller’s side, and TCP retransmission on the network. Pay attention to both.
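A sketch of the retry-plus-shrinking-timeout arithmetic: with a 2s budget and one retry, each attempt gets 1s, so the caller never waits 4s overall. The rpc supplier is a stand-in for a real remote call.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class TimeoutRetry {

    // Split the total budget across attempts: budget / (retries + 1) per attempt.
    static <T> T callWithBudget(Supplier<T> rpc, long budgetMillis, int retries)
            throws Exception {
        long perAttempt = budgetMillis / (retries + 1);
        Exception last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            CompletableFuture<T> f = CompletableFuture.supplyAsync(rpc);
            try {
                return f.get(perAttempt, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                f.cancel(true); // abandon the slow attempt and retry
                last = e;
            }
        }
        throw last; // budget exhausted
    }

    public static void main(String[] args) throws Exception {
        // 2000 ms budget, 1 retry => at most two 1000 ms attempts.
        System.out.println(callWithBudget(() -> "pong", 2000, 1));
    }
}
```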
11. Thread pool Isolation
On the subject of resilience: thread isolation was already mentioned when discussing Servlet 3 asynchronous processing. The benefit of thread isolation is preventing cascading failures and even avalanches. When the gateway calls many interface services, we need thread isolation for each interface. For example, if we call orders, goods, and users, the order business must not affect the handling of goods and user requests. Without thread isolation, when a network failure on the order service causes delays, the thread backlog eventually drives the whole service’s CPU load up; that is, all services become unavailable, and however many machines there are, they are instantly stuffed with requests. With thread isolation, our gateway can ensure that a local problem does not affect the whole.
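A minimal sketch of per-dependency thread isolation (the bulkhead idea) with plain JDK executors; the pool sizes, the 500 ms timeout, and the service names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BulkheadGateway {

    // One bounded pool per downstream dependency: if the order service stalls,
    // only orderPool threads pile up; goods and user requests keep flowing.
    private final ExecutorService orderPool = Executors.newFixedThreadPool(10);
    private final ExecutorService goodsPool = Executors.newFixedThreadPool(10);
    private final ExecutorService userPool = Executors.newFixedThreadPool(10);

    String callOrderService() throws Exception {
        Future<String> f = orderPool.submit(() -> {
            // the real remote call to the order service would go here
            return "order data";
        });
        // The timeout frees the gateway thread even if the dependency hangs.
        return f.get(500, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        BulkheadGateway gw = new BulkheadGateway();
        System.out.println(gw.callOrderService());
        // goodsPool and userPool would be used the same way by their endpoints.
        gw.orderPool.shutdown();
        gw.goodsPool.shutdown();
        gw.userPool.shutdown();
    }
}
```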
12. Downgrading and rate limiting
The industry has very mature approaches to downgrading and rate limiting, such as failback mechanisms and rate-limiting methods like the token bucket, the leaky bucket, and semaphores. Let me share some of our experience. Downgrading is generally implemented through downgrade switches in a unified configuration center: when many interfaces come from the same provider and that provider’s system, or the network of the data center it sits in, has a problem, we need one unified downgrade switch, otherwise we would have to downgrade interface by interface; in other words, there should be one big switch per type of business. Also remember “brute-force” downgrading: if, say, the forum feature is simply cut and users are shown a big white board, that is brute force; what we want instead is to cache some data so there is always bottom-line data to show. Implementing distributed rate limiting requires a common backend store such as Redis, with Lua on large nginx nodes reading the Redis configuration information. Our own rate limiting is per-machine; we did not implement distributed rate limiting. A minimal token-bucket sketch follows.
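This is a minimal single-machine token-bucket sketch, matching the per-machine rate limiting described above; the capacity and refill rate are illustrative.

```java
public class TokenBucket {

    private final long capacity;         // maximum burst size
    private final double refillPerNano;  // tokens added per nanosecond
    private double tokens;
    private long lastRefill;

    TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    // Non-blocking: returns false when the rate limit is exceeded,
    // so the caller can reject the request outright.
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(5, 100.0); // 100 req/s, bursts of 5
        for (int i = 0; i < 8; i++) {
            System.out.println("request " + i + " allowed = " + limiter.tryAcquire());
        }
    }
}
```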
13. Gateway Monitoring and Statistics
An API gateway makes serial calls, so exceptions at every step should be recorded and stored in one unified place such as Elasticsearch, to facilitate later analysis of call exceptions. Given that the company’s Docker applications are all distributed uniformly, and each Docker host already carried 3 agents before distribution, no more were allowed. We implemented an agent program that collects the logs output by the servers, sends them to a Kafka cluster, consumes them into Elasticsearch, and queries them through a web interface. The tracing features so far are relatively simple; this part still needs to be enriched.
Earnings per share were 1.71 US dollars, an increase of 101.2% compared with 0.85 US dollars last year.
In terms of revenue composition, Facebook’s advertising revenue in the first quarter was US$17.440 billion, an increase of 17% from US$14.912 billion in the same period last year.
Facebook’s first-quarter revenue from payments and other service fees was US$297 million, an increase of 80% from US$165 million in the same period last year.
Facebook’s daily active users in the first quarter were 1.73 billion, an increase of 11% year-on-year; monthly active users were 2.6 billion, an increase of 10% year-on-year.
The Facebook family of services (which includes Facebook, Instagram, WhatsApp, and Messenger) had an average of 2.36 billion daily active users, an increase of 12% over the same period last year.
Facebook’s average revenue per user in the first quarter was $6.95, an increase of 19% compared with last year’s $6.09; analysts had generally predicted $7.09.
As of March 31, 2020, Facebook’s cash, cash equivalents, and marketable securities totaled $60.29 billion.
Facebook founder and CEO Mark Zuckerberg said: “Our job has always been to help people stay in touch with the people they care about. As people rely on our services more than ever, we are focused on continuing to keep people safe and keep them informed and connected.”
Codeless/low-code programming started gaining market currency around 2013-14. No-code/low-code is a way of creating applications that lets developers build them quickly with minimal coding knowledge. Developers assemble and configure applications through a graphical interface and visual modeling. In this way, the developer skips the middle layers entirely: the visual code blocks already contain 90% of the functionality most applications require, and the developer focuses only on the remaining 10% of the code logic.
As a result, some developers inevitably feel a new sense of crisis: with the advent of codeless/low-code programming, are programmers going to lose their jobs? And what exactly are we talking about when we talk about codeless/low-code programming?
At first, one might think a low-code development platform is simply an IDE-like bundle of tools for improving development efficiency. In fact, low-code platforms provide far more capability than an IDE. Low-code development turns programming into "building blocks" and modularizes common code, so developers can complete application development by dragging and dropping in a graphical interface.
This saves developers the time of writing code by hand while retaining flexible control over how the application is built; very little code is needed to finish an application. Low-code platforms can not only integrate software development into other fields, but also let enterprises in other fields enter software development, accelerating the digital transformation of enterprises.
Low code has the following advantages:
First, they move quickly from requirements to application. Developers can build applications for multiple platforms at the same time, and demos can be completed within days or even hours, saving development costs.
Second, they reduce the complexity of development and lower the difficulty of building large-scale systems. The low-code platform framework absorbs a certain amount of complexity itself, with built-in security processes, data integration, and cross-platform support, reducing the repetitive code developers must write by hand so they can focus on implementing the key business logic.
Third, low-code platforms integrate mainstream architectures, which enables rapid deployment and also supports secondary development and multiple deployment configurations.
As early as 1982, James Martin proposed the idea of building applications without writing programs in his book "Application Development Without Programmers". Today, many IT companies are competing for the low-code market, making that vision possible: in China, vendors such as Jinyun (backed by strategic investment), the H3 BPM launched by Orzhe in 2010, and Nine Chapters' fully collaborative cloud; abroad, companies such as Google App Maker, Microsoft's Power Platform, Mendix, and Salesforce have all staked out positions in the low-code market.
According to a Forrester Research report, the low-code development platform market will reach $15.5 billion by 2020, a sign of how hot the market is. In another Forrester report, about 100 vendors are competing for that market, and Microsoft ranked first in the "who is your low-code vendor" statistics in both 2018 and 2019.
The novel coronavirus continues to ravage the world. The concern is not just the rapid spread of the virus itself, but also its knock-on effects: more and more countries are introducing strict control measures, and people are hoarding essential supplies and preparing for long-term self-isolation. So what, specifically, have the major tech giants done to alleviate the social impact of the disease, especially as countries and companies encourage people to work from home?
At present, the most effective means of controlling the outbreak is to avoid frequent contact between people as much as possible; therefore, social media has become an increasingly important channel for news communication and human contact. Several social media giants are taking steps to limit the spread of negative misinformation and false content related to the epidemic, while ensuring a channel of communication.
Facebook recently revealed that it will ban any ads claiming to "cure" the new coronavirus infection, while working to connect users with the World Health Organization and other local government and official information publishers. In addition, Facebook made a $20 million donation and provided free advertising space for the WHO to convey important information.
At the same time, Instagram added a notice above each feed listing links to reliable medical information sources, and prohibited non-authoritative organizations from posting virus-related content within the app. On Twitter, if users search for tags related to the new coronavirus, the system shows a similar notification and provides a link to the local health department (such as the US Centers for Disease Control and Prevention). Twitter is also working to remove conspiracy-related tweets and is warning corporate customers to manage such activity.
Through its own apps and websites, Amazon helps people find current fair prices and legitimate purchase channels for masks, toilet paper, and hand sanitizer as quickly and easily as possible. Like Twitter, Facebook, Google, and Microsoft, the company encourages employees to work from home. Amazon has also offered two weeks of paid leave to employees diagnosed with infection, but many partners have expressed dissatisfaction, saying that in practice its supermarket subsidiary encouraged employees to donate their own paid leave to infected colleagues so as to "reduce sick leave".
In addition to moving this year's Worldwide Developers Conference to an online live-broadcast format, Apple is actively removing applications that lack official health-organization or government backing, to prevent them from freely publishing misinformation about the new coronavirus. Apple has also closed many physical stores and suspended in-store trials of AirPods and Apple Watch to avoid cross-infection.
Similar to Apple, Google has started cleaning its Play Store of applications that may spread false information. Google has also built a dedicated knowledge base to ensure that correct information about the COVID-19 virus always ranks high in search results, and plans to cooperate with the US government on a special website giving the public information about the local epidemic and health resources. Finally, Google pledged a $7.5 million donation to the COVID-19 emergency fund established by the WHO.
Other companies are also working to publish news related to the outbreak and provide advice and guidance to the public. Reddit co-founder Alexis Ohanian rented a billboard in Times Square urging people to stay at home until "the epidemic has eased."
In addition, companies are curbing travel as much as possible. Airbnb announced that it will launch a flexible cancellation service within the next month, meaning that anyone who cancels a booking for any reason, no matter how far in advance, will incur no cancellation losses. Uber said this week that it will provide two weeks of financial assistance to drivers affected by quarantine. Grubhub will stop charging marketing fees to restaurants, helping operators whose foot traffic has fallen drastically.
As most people began staying at home, food delivery services surged in popularity. Postmates added new contactless delivery options to its app; Instacart launched a new "leave at the door" delivery method during the epidemic. DoorDash and Grubhub likewise encourage users to coordinate with couriers by phone and complete meal handoffs while avoiding face-to-face contact.
Yes, major technology companies have taken action to ensure that we have accurate information on the epidemic and to help us stay at home.
Every stage of the communications revolution we have experienced over the past 50 years has benefited from advances in technology, whether the internet, the cloud, networks, or the web. The same applies to the technological revolution currently under way in the form of 5G rollouts and 5G testing around the world. Modern advancements like the Internet of Things, consumerization and digitalization, and artificial intelligence (AI) all rely on fast, low-cost, reliable networks.
Every day more and more devices are actively communicating with each other, and simple upgrades to the current network are no longer sufficient to support the growing traffic and sophistication. 4G networks have reached the technical limit of how much data they can transmit, especially given the boom in internet-based services such as gaming, video, and remote working.
This is where 5G comes into the picture. 5G offers solutions to these market needs and a range of new opportunities for telecom, gaming, network, software, SaaS, and IT infrastructure companies, but the transition to 5G will be expensive. Telecom operators that choose to build new small or large communication cells face sharp increases in network costs for 5G rollouts or upgrades, while players that fail to move to 5G risk being cannibalized by those that do.
Network sharing has become a standard part of the telecom operator’s operating model, and the huge expansion of the network infrastructure required to successfully launch 5G will accelerate this trend. There are already different ways to implement network sharing, and each method has its own impact on the business model of the operator.
What is 5G? 5G is the successor to 4G. 5G uses higher radio frequencies and provides higher speeds, lower latency, and finer coverage.
One advantage of using high-frequency radio waves is that more devices can be served in the same area. Where 4G can support up to 4,000 devices per square kilometer, 5G can support up to 400,000. This opens up a whole new world for the Internet of Things, where device density in some regions can be very high.
One disadvantage of using high-frequency radio waves is that they cannot travel as far as the lower-frequency waves used by 4G. 5G networks therefore introduce a technology called mMIMO (massive multiple-input multiple-output), which uses a large number of antennas. Beyond housing the additional antennas, mMIMO can send and receive multiple data signals at once, making it possible to form targeted data beams that follow individual users. As a result, mMIMO can serve more users and devices simultaneously in a smaller area while maintaining fast data rates and consistent performance.
Unlike previous generations of mobile networks, 5G will not require a single operator to provide nationwide coverage through its own infrastructure alone. In fact, network sharing is ideal for 5G, and telcos are pushing for network sharing and software-defined networking as "native" components of the solution. Although the transition from "private" to "shared" network infrastructure will require operators to adopt new ways of working, it will also speed up 5G deployment, reduce rollout costs, and cut visual pollution by reducing the number of outdoor antennas required.
5G will help various industries improve their services and meet customers' and citizens' needs in better ways:
1) Governments will be able to monitor and direct traffic and public transportation more effectively, with better use of sensors, networks, and interconnected devices in an intelligent Internet of Things.
2) Automotive: With the advent of autonomous and semi-autonomous vehicles, low-latency on-board computers will be better able to communicate their current state, and their sensors will detect and process nearby obstructions, traffic lights, and other vehicles far more effectively for decision-making.
3) Retail: One of the biggest issues facing retail and consumer goods companies has long been real-time inventory information for replenishment, especially during natural or man-made disasters, which are growing more frequent. For the first time in the history of networks and analytics, 5G makes this capability available. When we say countries like China are truly moving toward a cashless society, it is because Chinese consumers can make even the smallest transactions digitally; cashless checkout is the norm in Chinese retail. 5G will make such services cheaper and more cost-effective.
4) Manufacturing: As any technology practitioner will tell you, factory floors, warehouses, and supply chains will be completely transformed in the coming decades by the convergence of 5G, AI, IoT, and robotics. 5G will help manufacturers control and analyze industrial processes with an unprecedented degree of precision and accuracy.
To sum up
After 20 years of continuous development and evolution, the network has become the core technology of our continuously connected world. The availability of standards-compliant, compatible networks has changed the way people expect to communicate with each other and the way information is shared today. And 5G enables the network to keep up with the future needs and expectations of consumers and the industry.
The investment required to deploy 5G is considerable. To maximize the return on that investment, a clear vision and a clear strategy are required. The keys to success are cooperation among all stakeholders and the transition to network sharing, and these are things we must prepare for immediately.
In this article, we will discuss whether edge computing will kill cloud computing. In addition, you will learn the advantages and disadvantages of each technology. Let’s delve into the future of edge computing and cloud computing to discover their perspectives and their impact on the IT industry.
The implementation of this technology can solve some of the current key issues. Data transfer from the edge device to the server for processing takes a lot of time. Although this delay does not seem severe, every millisecond is important. In addition, it is necessary to reduce the pressure on the bandwidth. In another case, high traffic and long distances can significantly reduce network speed. Unfortunately, cloud computing cannot currently solve these problems.
This is why there is no doubt that edge computing has more advantages than cloud computing. First, you can use it to process time-sensitive data. This technology relocates critical data processing to the edge of the network, thereby reducing information processing delays. Because of this feature, edge computing is ideal for applications that require immediate response. This will make them more robust and load faster.
In addition, edge computing may be more popular than cloud computing because it ensures lower management costs. Sending the most important data over short distances can save a lot of network and computing resources.
Similarly, if various smart devices use edge computing instead of cloud computing, there may be many benefits. The technology will guarantee an immediate response and provide the opportunity to perform accurate, fast calculations. This feature is particularly useful for the development of autonomous vehicles.
Moreover, this technology offers new opportunities for all large content providers such as Netflix or Amazon. These companies are interested in the development of streaming technology, and edge computing has become the preferred solution. This method allows users to access their favorite programs faster. In addition, these companies have the opportunity to expand their services without sacrificing current performance.
Unlike edge computing, if you use cloud computing, all data will be processed and stored in a remote data center or server. Any device or application that needs to access this information must be connected to the cloud.
Still, this technology can bring many benefits to companies using standard server networks. With the use of cloud computing, such organizations can ensure that only authorized users can access stored data.
According to Cisco’s global research, both humans and machines are expected to generate about 800 ZB of information by 2020. There is no doubt that only cloud servers can store such a large amount of data.
In addition, research shows that Internet of Things (IoT) devices in particular will produce more than 80 zettabytes of data. Unfortunately, much of this information will be deleted because the devices do not have enough storage space to hold the data they receive.
Although the main drawback of cloud computing is its lack of speed, the technology provides amazing processing power and storage space. These features will undoubtedly continue to be useful for small business owners and those who do not perform time-critical tasks.
Keep in mind that it may sometimes be necessary to use both technologies for better results. Combining edge computing and cloud computing gives you the opportunity to maximize their potential while reducing their disadvantages.
Following this approach will open up new horizons especially for IT and AI companies. IoT devices will be able to run faster and process data efficiently without losing storage capacity and processing power. Of course, edge computing now seems to have more advantages than cloud computing, but you should not underestimate the advantages of the latter.
Today, the future of the network seems to be somewhere between edge computing and cloud computing. There is no doubt that leading companies will find a way to get more out of these technologies and overcome their disadvantages. It will open up possibilities for improving many services and driving innovation.
At the same time, some users predict the end of cloud computing. However, many experts believe in the future of this technology. Also, for now, no analytics framework can prove that cloud computing has become less popular.
Although edge computing provides solutions to many challenges, cloud computing remains an important part of the IT industry. Today, these two technologies are still important and provide data analysis solutions for a variety of organizations.
Edge computing and cloud computing are different technologies that cannot replace each other. Each is important to the IT industry and allows different methods to be used to achieve greater returns. But today, many companies are turning to edge technology because it offers more opportunities than cloud computing.
The decision on one of these technologies depends on the company’s goals and needs. Try to find out if edge computing can help you overcome existing technical challenges and make sure it can serve your business.
Remember that technology is constantly changing and companies must adapt to new trends. However, you should always choose only those solutions that help improve your business. In addition, you can always choose to use both edge and cloud computing to achieve your company’s goals.
The characteristics of the blockchain: building an ideal “fortress of trust”
In the information society, we face massive amounts of fragmented information every day. This information comes from different media through various channels, may have undergone several rounds of "processing", and occupies our senses with its unbridled expression. In daily life, you can of course choose to block it or consume it selectively, but when you want to pursue the truth of an event and assemble complete, accurate information, you will find that this is actually a huge challenge.
Not only that: even in extraordinary times, most people will not give up the need to get at the truth. In the face of a sudden crisis such as this epidemic, collective anxiety can even intensify that impulse, forcing people to stitch fragments together in the most primitive ways when they lack the necessary scientific tools. This is why rumors swirled at the beginning of the outbreak, about the spread of the epidemic, the number of people infected, the modes of transmission, and the control measures, while fragments of public opinion, expert advice, and information from mainstream and non-mainstream media platforms grew deafening. Not long ago, many questions were also raised about the Red Cross Society of Hubei Province: responsible for receiving and distributing aid materials, it was swept into a storm of public opinion over the distribution of supplies and the opacity of its information.
When uneven pieces of information rush together, the parts that cannot fit easily become the focus of attention; and when guesswork cannot fill the cracks in logic and imagination, those gaps become the soil in which group panic breeds. Once a trust crisis breaks out, rebuilding trust is undoubtedly much harder, because the seeds of doubt are endless. In addition, in this epidemic, besides authenticity and credibility, timeliness is also one of the public's requirements for information. Lagging information obviously cannot satisfy the public's grasp of the situation, cannot serve the implementation of solutions, and cannot facilitate public supervision.
In response to this crisis, blockchain technology can make a difference in the construction of “credibility”.
1. Integrate information fragments and present data panorama. To respond to the epidemic, we need multisectoral, full-process, real-time data and information. At the same time, this information can be made available to the public to ensure the public’s right to know. The blockchain can improve the ability to respond to crisis events by sharing data among all participants, and can automate data sharing and storage in different levels of organizations, thereby realizing data integration throughout the process and showing a panoramic view of the data.
2. Open, transparent, and tamper-resistant.
Because of the data structure inside a block, modifying any data in the block makes the hash values of the lower and upper layers in that block no longer correspond. In addition, any read, write, delete, or retrieval activity on the chain leaves a trace. From this perspective, blockchain technology makes information open and transparent and prevents tampering. If the infection cases in this epidemic were put on a chain, information such as onset time, disease progression, and treatment progress would be recorded on the chain, so that medical personnel and scientific researchers around the world, including those in non-infected areas, could follow developments and conduct research, which would also help defeat the virus as soon as possible. At the same time, because the blockchain uses multi-party node authentication, each authenticated node must bear certain responsibilities, which also improves collaboration across departments: within this framework, each department, as a node on the chain, can at least clarify its own duties instead of shirking responsibility along ambiguous boundaries.
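To make the tamper-resistance argument concrete, here is a minimal Java sketch of hash chaining, where each block's hash covers its own data plus the previous block's hash. The case records, the "GENESIS" seed, and the class name are hypothetical; real chains carry structured, signed data and consensus among nodes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

/** Sketch: each block's hash covers its data and the previous block's hash,
 *  so tampering with earlier data breaks every later link in the chain. */
public class HashChainDemo {
    static String sha256(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical case records for illustration only.
        String block1Data = "case 1: onset 2020-01-20, in treatment";
        String block1Hash = sha256("GENESIS" + block1Data);

        String block2Data = "case 2: onset 2020-01-22, recovering";
        String block2Hash = sha256(block1Hash + block2Data);

        // Tamper with block 1: its hash changes, so block 2's stored hash
        // no longer matches and the whole chain is visibly inconsistent.
        String tamperedHash = sha256("GENESIS" + "case 1: onset 2020-01-25, in treatment");
        String recomputed = sha256(tamperedHash + block2Data);
        System.out.println("chain still consistent? " + recomputed.equals(block2Hash)); // false
    }
}
```

This is the sense in which "the lower and upper layer hash values no longer correspond": any edit to historical data forces every subsequent hash to change, which honest nodes immediately detect.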
3. The whole process can be traced, which facilitates accountability.
Introducing a blockchain traceability mechanism, holding accountable those who should be held accountable, and improving what should be improved are conducive to establishing a society-wide prevention and control system. Take the donation and distribution of epidemic relief materials as an example: if blockchain provides the technical support, all money and materials can be recorded on the chain the moment they arrive. This can effectively remove the middlemen who act as transfer channels for large amounts of aid, thereby reducing problems such as misappropriation and theft. No matter how many links a shipment passes through from donor to recipient, there are records to follow, and information will not break between logistics, warehousing, and distribution. Where a donated item currently is, whether it was distributed in a timely manner, where it is stalled, who the ultimate beneficiary of the donated money is, whether it became a bucket of water, a bag of instant noodles, or a piece of protective clothing: the donor can see all of this clearly, with no information left in a black box.
Many issues of trust were exposed during the public health crisis. "Credibility" is not only a moral concept of justice, but also a cornerstone of social affairs management. Increasing the public's trust in the government, so that people cooperate with epidemic prevention and control, is the right path. Only with transparent and open information, clearly delineated powers and responsibilities, and active, traceable policy implementation can the government's macro-coordination role be brought into full play.
The reality of blockchain: large-scale application is still a long way to go
Blockchain, as a cutting-edge technology toolbox, is still in the early stages of development, and large-scale application is yet to come; the adoption of blockchain remains relatively low. From a practical standpoint, promoting real-world blockchain use requires certain conditions to be met, and from the perspective of epidemic prevention and control, the support and cooperation of multiple parties are still needed.
First, blockchain requires the cooperation of all participants: donors and hospitals, for example, must have corresponding terminals. At the same time, verifying the authenticity of information before it goes on the chain remains a major challenge.
Second, the links involved in blockchain are extremely complex, so its applications must be supported by infrastructure such as big data and cloud computing. This requires the government to coordinate various departments and open the corresponding big-data interfaces, which must be reflected in the top-level design.
Third, blockchain's distributed-storage nature requires professionals to design and support the distributed nodes, and the relevant talent pool will still take time to build up.
Fourth, because the world is geographically vast and regional economic and social development remains unbalanced, the rollout of the necessary hardware will take time. For example, tracing an infected person's trajectory requires large numbers of cameras and local network connections; at present, this infrastructure still needs improvement, especially in relatively remote areas.
A brain–computer interface (BCI) uses devices that enable their users to interact with computers and machines through brain activity alone, which the system measures and processes.
Many BCI applications already exist, allowing users to perform tasks such as writing sentences by selecting letters, moving a cursor on a computer screen, playing an electronic ping-pong game, and controlling an orthosis that provides a graspable hand.
BCIs are also used to study the human brain in relation to performance at work, in transportation, and in other everyday settings, which can provide important guidelines and constraints for theories of information presentation and task design. These research approaches also aim at applications that are not necessarily clinical or intended for impaired users, making BCI use possible for new potential user groups such as gamers and for applications in the domestic domain, human–computer interaction, robotics, and team performance.
Image credit: Brain-Computer Interfaces Handbook: Technological and Theoretical Advances, by Chang S. Nam, Anton Nijholt, and Fabien Lotte.
Brain–computer interfaces (BCIs) are systems that translate a measure of a user’s brain activity into messages or commands for an interactive application. A typical example of a BCI is a system that enables a user to move a ball on a computer screen toward the left or toward the right, by imagining left- or right-hand movement, respectively. The very term BCI was coined in the 1970s, and since then, interest and research efforts in BCIs have grown tremendously, with possibly hundreds of laboratories around the world studying this topic.