Never Tried Reverse Range in Python? Get to Know the Functions That Help You

Reverse Range in Python

Before we dig into reverse ranges in Python, we need to see how loops in Python work. Understanding the for ... in range() pattern is useful in any code that walks a sequence in reverse. Here is an example:

sentence = ['Harry', 'is', 'happy!']
 
for iterable in sentence:
    print(iterable)
    
#Output:
#Harry
#is
#happy!

A loop is used for a step-by-step walkthrough of a list, string, array, and so on. Python's loop syntax is very simple (unlike the C-style for (i = 0; i < n; i++)): its “for ... in” loop is the counterpart of the for loop in other languages.

In our example the syntax is clear: the loop variable (here named ‘iterable’) walks through our ‘sentence’. On the first pass it holds the first element of the list, then the second, then the third. That is how we receive the output: ‘Harry is happy!’.

Let's see one more example before we reverse a list in Python:

received_data = [15, 6, 2, 12, 54]
result = []

for number in received_data:
    temp = number / 3
    result.append(temp)

print(result)
#Output:
#[5.0, 2.0, 0.6666666666666666, 4.0, 18.0]

This loop builds a new list, ‘result’, whose elements are the original values divided by three. The loop variable is now called ‘number’: it takes each element of ‘received_data’ in turn, divides it, and appends the result to the new list.

What is Python's range() function?

To understand range(), a little history helps. Python 2 was a good version of Python, but some of its decisions look odd today, and range() is one of them. The range() in Python 2 is not the same as range() in Python 3: Python 3's range() is the equivalent of Python 2's xrange(), not of Python 2's range(). In Python 3 most such functions return iterable objects rather than lists, in order to save memory. Examples include zip(), filter(), map(), and the dictionary methods .keys(), .values(), and .items().

So the xrange() method in Python 2 was more memory-efficient when creating a huge series of numbers, and that is why Python 3 ships it as the built-in range() function.
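To see why returning an iterable object saves memory, compare the size of a range object with the count of numbers it describes; a small sketch using the standard sys module:

```python
import sys

# A range object stores only start, stop, and step, so its memory
# footprint stays tiny no matter how many numbers it describes.
big = range(1_000_000_000)

print(sys.getsizeof(big))   # a few dozen bytes, not gigabytes
print(big[123_456_789])     # 123456789 -- computed on demand, not stored
```

In Python 2, list-returning range() would have had to allocate a billion integers for the same call.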

How do we use the range() function?

Python's range() lets you create a series of numbers within a given range. The arguments you pass decide where that series begins, where it ends, and how big the difference is between one number and the next.

Let’s see a Python print range() code example:

for iterable in range(10, 70, 10):
    division = iterable / 10
    print(f'{iterable} : 10 = {int(division)}')

In this for loop, thanks to range(), we created a series of numbers divisible by 10 without having to write each of them out ourselves.

That said, reaching for range() in every for loop is usually frowned upon. range() is a good choice when you want to produce a sequence of numbers, but it is not great for iterating over existing data. For that, it is better to loop over the data directly with the “in” operator:

sentence = ['Harry', 'is', 'happy!']

for iterable in sentence:
    print(iterable)

#Output:
#Harry
#is
#happy!

There are three forms of the range() function's syntax:

  • When range takes only a stop argument: range(stop)
  • When range takes start and stop arguments: range(start, stop)
  • And when you add a step: range(start, stop, step)

Stop argument in range()

When you pass only one argument to range(), it marks where your series of numbers stops. In other words, you get a series that starts from 0 and includes every number up to, but not including, the argument you provided.

Let's see an example with range(10):

for iterable in range(10):
    print(iterable)

#Output:
#0
#1
#2
#3
#4
#5
#6
#7
#8
#9

And here we see that we got all the numbers except the argument we typed in. That means 10 is our stop argument, and it is not included.

Start argument in range()

One way to control where the series starts is to pass two arguments. You then decide not only where your numbers stop but also where they begin, so you can start from 1 or any other number. The only rule is the order of the arguments: the first is the start argument and the second is the stop argument.

So let's see an example where our series starts not from 0 but from 1:

for iterable in range(1, 4):
    print(iterable)

#Output
#1
#2
#3

Here we get all the numbers starting from 1. If we provide a different first argument to range(), the series starts from it instead, as long as it is not the same number as the stop argument (in that case the range is empty).

And what happens if we provide the third argument?

Step argument in range()

The answer is: the third argument lets you choose not only where the series of numbers stops and starts but also how large the difference is between one number and the next. If you omit it, the step defaults to 1.

The only rule is that the step argument cannot be 0; otherwise a traceback error will appear. Here is an example:

range(1,4,0)

#Output:
#ValueError: range() arg 3 must not be zero

So now you can understand the syntax of the range function. Let’s try one more example of step argument usage:

for iterable in range(10, 70, 10):
    division = iterable / 10
    print(f'{iterable} : 10 = {int(division)}')

#Output:
#10 : 10 = 1
#20 : 10 = 2
#30 : 10 = 3
#40 : 10 = 4
#50 : 10 = 5
#60 : 10 = 6

In this example we see how the step argument helps us climb toward a higher number. In Python, as in other programming languages, this is called “incrementing”.

Incrementing using range() function

As said before, the step argument is used for incrementing when it is a positive number. The next example shows what incrementing in Python means:

for iterable in range(3, 100, 25):
    print(iterable)

#Output:
#3
#28
#53
#78

Here we can see the difference between the numbers in the output. We got a range in which every number is greater than the preceding one by our step argument.

So incrementing means our steps forward. But what about steps backward? Do they exist?

Decrementing using range() function

Incrementing is stepping forward in the loop. What does decrementing mean? When your step argument is negative, you move through a series of numbers in decreasing order: in other words, a Python range backwards. The following example shows how it works:

for iterable in range(100, -50, -25):
    print(iterable)

#Output:
#100
#75
#50
#25
#0
#-25

Here our step equals -25, so we decremented by 25 on each pass of the loop. If we provide -3 as our step argument, we get a range in which every number is smaller than the preceding one by 3.
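For instance, a negative step of -3 looks like this:

```python
# Decrementing by 3: each number is smaller than the previous one by 3.
for iterable in range(10, 0, -3):
    print(iterable)

# Output: 10, 7, 4, 1
```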

Floats in range()

As you may have noticed, all the examples and explanations so far used only whole numbers, or in other words, integers. So there is a second restriction after zero: the range() function cannot work with float numbers.

If you call range() with a float number, the output will be:

for iterable in range(6.68):
    print(iterable)

#Output:
#TypeError: 'float' object cannot be interpreted as an integer

But if you need to work with floats, you can use the NumPy library, which provides the arange() function.
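If you would rather stay in the standard library, a small generator can approximate arange() for simple cases. This is just a sketch; the name frange is our own, not a built-in:

```python
def frange(start, stop, step):
    """Yield floats from start up to, but not including, stop."""
    value = start
    while value < stop:
        yield value
        value += step

print(list(frange(0.0, 1.0, 0.25)))   # [0.0, 0.25, 0.5, 0.75]
```

Note that the repeated addition accumulates floating-point error for steps that are not exact binary fractions, which is one reason NumPy's arange() exists.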

Reversed() function

Now that we know all the forms of range() in Python, we can start to learn how to reverse it. To create a decrementing range we can use range(start, stop, step) with a negative step. But Python 3 also has a built-in function called reversed(). To make it work, wrap a range() call inside reversed(). Let's see the code:

for iterable in reversed(range(6)):
    print(iterable)

#Output:
#5
#4
#3
#2
#1
#0

A negative-step range() lets you iterate over a decreasing sequence of numbers, while reversed() loops through an existing sequence in reverse order. Either way works for walking an array or a series of numbers backwards in Python.
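The two approaches produce the same numbers, as a quick check shows:

```python
# reversed(range(...)) and an equivalent negative-step range
# yield exactly the same sequence.
print(list(reversed(range(6))))   # [5, 4, 3, 2, 1, 0]
print(list(range(5, -1, -1)))     # [5, 4, 3, 2, 1, 0]
```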

Note also that the reversed() function works with strings.

Reverse Python list

When we want to reverse a list in Python, we use the reversed() function. The list goes in as an argument, and reversed() returns an iterator that walks through the items in reverse order. Here is the code:

example_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

reversed_list = reversed(example_list)
print(reversed_list)
print(list(reversed_list))

#Output:
#<list_reverseiterator object at 0x000001B25276B700>
#[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

Here we can see how to call reversed() with a list. First we passed example_list as an argument to reversed() and stored the result in reversed_list. Then list() consumed the iterator and returned a new list containing the same items as example_list, but reversed.
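One detail worth remembering: reversed() returns a one-shot iterator, so once it has been consumed it is empty:

```python
example_list = [0, 1, 2, 3]
reversed_list = reversed(example_list)

print(list(reversed_list))   # [3, 2, 1, 0]
print(list(reversed_list))   # [] -- the iterator is already exhausted
```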

Reversed() function with string

To make this work you need two functions: reversed() and str.join(). When you pass a string into reversed(), you get back only an iterator that yields the characters in reverse order. Here is an example:

greeting = reversed('Hello, Kodershop!')
print(next(greeting))
print(next(greeting))
print(next(greeting))

#Output:
#!
#p
#o

In the example above we called next() on the iterator, so we got the characters one by one from the right-hand side of the original string.

An important thing to note about reversed() is that the resulting iterator reads characters directly from the original string. In other words, it does not create a new reversed string but walks the characters of the existing one backwards. This behavior is efficient in terms of memory consumption.

To get a reversed string out of it, you can pass the reversed() iterator directly to .join(). Here is a case with .join():

greeting = ''
print(greeting.join(reversed("pohsredoK ,olleH")))

#Output:
#Hello, Kodershop
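The same pattern reverses an ordinary string in one step:

```python
# join() consumes the reversed() iterator and builds the new string.
print(''.join(reversed('Hello, Kodershop!')))   # !pohsredoK ,olleH
```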

TOP Agile Metrics – How to Correctly Measure Where We Are Moving

Agile Metrics

Agile’s introduction has changed how businesses operate. Agile has been a game-changer for businesses, and the benefits are clear to see—fewer expenses and shorter time to market are just a couple. But making the changeover alone won’t allow businesses to fully benefit from agile. Agile adoption won’t be very beneficial to a company unless agile metrics are taken into consideration.

Agile metrics are used to track productivity throughout the stages of the software development life cycle (SDLC). In this post we'll talk about the agile metrics that are crucial for success and look at their significance to a company.

Agile alone, however, doesn't always enable a business to produce high-quality work on schedule. Lean is a method for minimizing unnecessary costs, activities, and labor. Together, lean and agile produce a fast, error-free application that ensures customer satisfaction, and they cut out waste that costs a business money. We'll talk about agile metrics first, then lean metrics and why they're important, and we'll look at a few lean measures that are crucial for a company's performance in addition to agile.

Let's have a look at the details.

What are metrics in Agile?

Metrics are standards of measurement. Agile metrics are benchmarks that help software teams track their productivity throughout the various stages of the software development life cycle (SDLC).

The development process is not complete without agile metrics. Agile metrics assist organizations or teams using the agile paradigm in evaluating the quality of their product.

Agile metrics help control team performance by gauging a team's productivity. Any gaps are disclosed immediately, and because the data and its usage are quantifiable, it is simpler to address flaws. For instance, velocity metrics can be used to monitor the output of your team.

Why Agile metrics are necessary

Now that we understand what agile metrics are, let's dissect how they operate. Continuous improvement (CI) is the foundation of the agile methodology. You cannot, however, impose it upon teams; it must originate from within. Self-improvement (SI) is therefore necessary, and it is safe to conclude that CI cannot exist without SI.

Agile includes immediate delivery as a crucial element. But in this instance, SI shouldn’t be disregarded. Teams that practice SI do better than those that don’t. But creating a SI that is effective and sustainable is not simple. It requires a management framework because it is a lengthy procedure. Agile metrics assist SI by monitoring software quality and team performance. These measurements have some direct effects on CI.

A crucial component of agile is producing a high-quality output in addition to enhancing continuity. Finding a balance between these two can be difficult, though. As a result, teams now need criteria by which to gauge their development. Agile metrics help teams become self-managing in the end. They also aid businesses in providing value. At the same time, CI seamlessly integrates into the process.

Why should you include Agile Metrics in your projects?

Agile metrics track several project development parameters. These agile KPIs are crucial for your project. You’ll learn more about the development process thanks to them. They will also make the process of releasing software more generally easier.

1.    Sprint Burndown Report

A scrum framework consists of teams that structure their work in sprints. Since a sprint has a deadline, it's crucial to check task progress periodically. A sprint burndown report tracks the completion of tasks throughout a sprint. The two primary units of measurement here are time and the amount of work remaining: time is indicated on the X-axis, and the remaining work on the Y-axis, measured in hours or story points. Before a sprint, the team estimates the workload, which should be finished by the end of the sprint.

2.  Control Chart

Agile control charts concentrate on the amount of time it takes for jobs to move from the “in progress” to the “complete” phase. They serve the function of measuring a single issue’s cycle time. Deliveries from teams with consistent cycle periods are predictable. In addition, teams with quick cycle times produce a lot of work. Teams can increase the flexibility of their processes by measuring cycle times. For instance, when modifications are made, the results are immediately apparent. Team members can then make the necessary changes. Every sprint should generally aim to obtain a quick and reliable cycle time.

3.  Escaped Defects

Defects in production cause a great deal of unexpected harm, and the team must deal with the issues they raise. Escaped-defect metrics help detect bugs after a release goes into production, giving a view of the software's quality in the wild. Look at Plutora's test management system for additional details on automated defect minimization.

4.   Net Promoter Score

The Net Promoter Score calculates how likely customers are to tell others about a product or service. It is an index with a value between -100 and 100. Customer retention is a crucial aspect of a business’ success. The Net Promoter Score can be used as a stand-in for this objective.
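The arithmetic behind the score is simple: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). A small sketch of the standard formula; the helper name net_promoter_score is ours:

```python
def net_promoter_score(ratings):
    """NPS on a 0-10 survey: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3]))   # 2 promoters, 2 detractors -> 0.0
```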

5.  Cumulative Flow Diagram

The cumulative flow diagram (CFD) provides uniformity in the team’s workflow. Time is depicted on the X-axis. The Y-axis represents the number of problems. The diagram should ideally flow smoothly from left to right. If the color bars are flowing unevenly, smooth them out. The band’s shrinking indicates that throughput is greater than entry rate. If the band becomes wider, your workflow capacity is larger than necessary and can be transferred to another location to improve flow.

The CFD calculates the current status of the ongoing work. With it, you can take action to quicken the process. Bottlenecks are clearly represented visually in the diagram. You can examine how bottlenecks developed initially. The team can then take action to get rid of them and improve after that.

6.   Code Coverage

The percentage of code that unit tests cover is measured by code coverage. This measure is available for every build. It displays the raw percentage of code coverage. This metric provides a good perspective on advancement. However, it excludes other forms of testing. As a result, high code coverage numbers don’t always indicate high quality.

7.    Value Delivered

Here, project managers give each need a value. This metric can be expressed in terms of money or points. The main priority should be giving high-value features a chance to be implemented. This metric’s rising trend indicates that things are proceeding as planned. A declining trend, on the other hand, isn’t encouraging. It indicates that lower-value features are currently being implemented. The team needs to make amends if such is the case. You might even need to halt product development on occasion.

8.     Blocked Time

This metric puts a blocker sticker on a task, indicating that the assignee cannot move on with a specific job because of a dependency. As soon as the dependency is satisfied, the blocked card on the task board should be moved to the right. To gauge blockers, count the number and duration of blocked cards. Once the blocks are removed, you'll be able to complete your “in progress” tasks quickly.

9.    Quality Intelligence

For clarity on software quality, use the quality intelligence metric. It helps locate recent code alterations. Say the team has written new code, but testing hasn't been completed yet; the quality of that code may slip. Quality intelligence helps make that determination and tells the team when to devote more time to testing.

Agile benefits both customers and staff. In an agile framework, the indicators we covered aid in streamlining the development process. Development is the focus of agile metrics. Lean, however, is the route to go in order to optimize the production processes. Lean and agile both enable teams to provide better value. They also emphasize providing clients with value quickly. Discuss lean metrics now. We’ll learn what they are and talk about a few lean metrics that you absolutely must incorporate into your development cycle.

10.   Work Item Age

Work item age measures how long a work item has been in progress since it was started. Items that grow unusually old signal work that is stuck, so the team can intervene before it delays the rest of the flow.

11.    Lead Time 

Lead time is the interval between the moment a delivery request is made and the moment the product is actually delivered. It encompasses all the steps taken to complete a product, including creating the business requirement and addressing issues along the way. Lead time is a crucial indicator because it gives a precise time calculation for each procedure.

12.    Failed Deployments  

A useful quality metric is failed deployments. It aids in determining the total number of deployments. Teams can also assess the dependability of the production and testing environments. Additionally, this metric determines when a sprint is prepared to go into production.

13.    Epic and Release Burndown  

Epic and release burndowns concentrate on the wider picture as opposed to a sprint burndown. They monitor progress over a vast corpus of work. In a sprint, there are numerous epics and versions of the work. Therefore, it’s crucial to monitor both their development and each sprint. The workflow within the epic and version must be understood by the entire team. Burndown charts for releases and epics make that possible.

14.    Velocity 

Velocity measures the team's average work output during a sprint, averaged over multiple iterations. The more iterations there are, the more accurate the forecast. Hours or story points are the units of measurement. Velocity also affects a team's ability to work through backlogs. It tends to change over time, so tracking it is crucial for ensuring reliability; if velocity starts to drop, the team needs to make a change.

What Are Lean Metrics?

Let’s move on to lean metrics now that we have a thorough understanding of agile metrics. Any company’s best period is when revenues increase. However, as sales rise, the burden also does. The development operation expands concurrently. It is therefore challenging to gauge performance effectiveness. The quality of a product can occasionally be compromised when performance is measured while under a heavy workload. This method needs to be productive and profitable. Lean metrics are thus frequently used by businesses to assess and gauge their performance.

To increase manufacturing efficiency, most businesses concentrate on eliminating waste. But before implementing lean management, it’s crucial to understand how different elements are measured. Lean is all about getting rid of unnecessary items. Lean metrics keep track of and show off advancements in production and management. They oversee and manage the development processes as well. Customers are given ongoing quality as a result. It sort of points out where work needs to be done. This makes changing things easier to implement.

Importance of Lean Metrics

Profitability is the foundation of every business. The key priorities are increasing production and decreasing waste. So how does a company do these? The solution is straightforward: by using lean concepts. Lean operations increase productivity by reducing labor and material waste. Additionally, they raise the level of production at the same time. Maintaining minimal inventories in accordance with demand is part of lean management. Additionally, reducing downtime and transportation losses is necessary.

A good lean operation depends on lean metrics. They support the sustainability of businesses. As lean increases productivity, workers can concentrate on innovation. The finished product is of excellent quality and contains few bugs.

The Missing Metric: Quality Intelligence

We provided a number of potent metrics that offer crucial insights into the agile process. However, none of them, not even the burndown chart or cycle time, can clearly and effectively give us the most crucial information: “how good” is the software being developed by our developers.

This lacking metric—a clear view of software quality—can be supplied by a new class of tools called Software Quality Intelligence. The following quality metrics are provided by the SeaLights platform, which integrates information on code changes, production uses, and test execution.

  • Analyzing test gaps: Finding places in the code that have recently undergone changes or have been used in production but have not been tested. The ideal location to put money into quality improvement is test gaps.
  • Quality trend intelligence identifies the system components whose quality coverage is improving and those whose quality is declining, indicating the need for further testing time.
  • Release quality analytics: to evaluate the readiness of a release, SeaLights conducts real-time analytics on millions of test runs, code modifications, builds, and production events, answering which build offers users the best quality.

Wrapping It Up

Agile metric implementation improves development’s clarity. Because of how popular agile is, businesses adopt it without hesitation. However, the truth is that adopting agile practices alone is insufficient. Bottom line: If agile hasn’t been effectively deployed, stop expecting miracles from it. Agile must be fully implemented in order to have agile metrics. Keep in mind the agile and lean metrics mentioned above if success is truly your goal. Include them in your workflow. It’s very conceivable that in the future you’ll witness your business soar to heights you never thought possible!

gRPC – JSON Transcoding with ASP.NET Core and More – Features and Manual

Transcoding JSON to gRPC

gRPC is a powerful Remote Procedure Call (RPC) framework. It uses message contracts, Protobuf, streaming, and HTTP/2 to build high-performance, real-time services.

One of gRPC's drawbacks is its incompatibility with some platforms. Because browsers don't fully support HTTP/2, REST APIs and JSON remain the main way to supply data to browser apps. Notwithstanding the advantages gRPC offers, REST APIs and JSON are crucial in current apps, and developing gRPC and JSON Web APIs side by side adds needless complexity.

What is gRPC JSON?

gRPC JSON transcoding is an add-on for ASP.NET Core that produces RESTful JSON APIs for gRPC services. Once set up, transcoding enables apps to call gRPC services using well-known HTTP concepts:

  • HTTP verbs
  • URL parameter binding
  • JSON requests/responses

About JSON transcoding implementation

The idea of combining gRPC and JSON transcoding is not new. grpc-gateway is another tool for converting gRPC services into RESTful JSON APIs. It employs the same .proto annotations for gRPC services that map HTTP concepts. The key distinction is in how each is put into practice.

Code generation is used by grpc-gateway to build a reverse-proxy server. RESTful requests are converted by the reverse-proxy into gRPC+Protobuf and sent via HTTP/2 to the gRPC service.

The advantage of this strategy is that the gRPC service is unaware of the RESTful JSON APIs, so grpc-gateway can be used with any gRPC server.

gRPC JSON transcoding, by contrast, runs inside the ASP.NET Core app itself. After converting JSON into Protobuf messages, it calls the gRPC service directly. JSON transcoding benefits .NET app developers in a number of ways:

  • Fewer moving parts: one ASP.NET Core application manages both the mapped RESTful JSON API and the gRPC services.
  • Better performance: JSON transcoding deserializes JSON to Protobuf messages and calls the gRPC service directly, in-process, which offers substantial performance advantages over initiating a fresh gRPC call to a separate server.
  • A smaller hosting bill, because fewer servers are needed.

MVC and Minimal APIs are not replaced by JSON transcoding. It is quite opinionated about how Protobuf maps to JSON and only supports JSON.

Mark up gRPC methods:

Before they support transcoding, gRPC methods need to be annotated with an HTTP rule. The HTTP rule specifies the HTTP method and route as well as details on how to call the gRPC method.

Let's take a look at the example below:

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/v1/greeter/{name}"
    };
  }
}

The example above:

  • Defines a SayHello method on a Greeter service and uses the google.api.http option to specify an HTTP rule for the method.
  • Makes the method reachable via GET requests to the /v1/greeter/{name} route.
  • Binds the request message's name field to a route parameter.

Streaming techniques:

Conventional gRPC over HTTP/2 supports streaming in all directions. With transcoding, only server streaming is supported; client streaming and bidirectional streaming methods are not.

Server streaming methods emit line-delimited JSON: each message published with WriteAsync is serialized to JSON and followed by a new line.

Three messages are written using the server streaming technique below:

public override async Task StreamingFromServer(ExampleRequest request,
    IServerStreamWriter<ExampleResponse> responseStream, ServerCallContext context)
{
    for (var i = 1; i <= 3; i++)
    {
        await responseStream.WriteAsync(new ExampleResponse { Text = $"Message {i}" });
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}

Three line-delimited JSON objects are sent to the client:

{"Text":"Message 1"}
{"Text":"Message 2"}
{"Text":"Message 3"}

Please note that the WriteIndented JSON setting does not apply to server streaming methods. Pretty printing introduces new lines and whitespace, so it cannot be used with line-delimited JSON.
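A client in any language can consume such a stream one line at a time. Here is a minimal Python sketch; the in-memory body below stands in for a real HTTP response:

```python
import io
import json

# Simulated response body: three line-delimited JSON messages.
body = io.StringIO('{"Text":"Message 1"}\n{"Text":"Message 2"}\n{"Text":"Message 3"}\n')

for line in body:
    if line.strip():                 # skip blank lines
        message = json.loads(line)   # each line is a complete JSON object
        print(message["Text"])
```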

HTTP protocol

The .NET SDK's ASP.NET Core gRPC service template builds an app that is configured for HTTP/2 only. HTTP/2 is a good default when an app only supports conventional gRPC, but transcoding works with both HTTP/1.1 and HTTP/2. Some platforms, including Unity and UWP, cannot use HTTP/2, so to support all client apps, configure the server to enable both HTTP/1.1 and HTTP/2.

Change the appsettings.json default protocol:

{
  "Kestrel": {
    "EndpointDefaults": {
      "Protocols": "Http1AndHttp2"
    }
  }
}

The protocol negotiation required for enabling HTTP/1.1 and HTTP/2 on the same port uses TLS. See ASP.NET Core gRPC protocol negotiation for further details on setting up HTTP protocols in gRPC apps.

gRPC JSON transcoding vs gRPC-Web

The ability to call gRPC services from a browser is provided by both transcoding and gRPC-Web. Let's take a look at the main differences between the two, as each goes about it in a different way:

  • With the help of the gRPC-Web client and Protobuf, gRPC-Web enables browser apps to call gRPC services. gRPC-Web delivers brief, fast Protobuf messages, but it requires the browser app to build a Protobuf client.
  • Transcoding enables browser-based applications to use gRPC services as if they were JSON-based RESTful APIs. The browser app does not need to create a gRPC client or be familiar with gRPC in any way.

Browser JavaScript APIs can be used to invoke the prior Greeter service. For example, with fetch (a sketch; the exact route and port depend on the http annotations in the service's .proto file):

fetch('https://localhost:5001/v1/greeter/world')
  .then(response => response.json())
  .then(data => console.log(data.message));

ML.NET – Specialized Machine Learning for the .NET Platform



ML.NET: Machine Learning for .NET

ML.NET is more than just a machine learning library that offers a specific set of features. It is developing into a high-level API and comprehensive framework that not only provides its own ML features but also simplifies lower-level ML infrastructure libraries and runtimes, such as TensorFlow and ONNX.

Microsoft publicly unveiled ML.NET at the Build conference in May 2018: a free, cross-platform, open-source machine learning framework created to bring the power of ML to .NET applications for a variety of scenarios, including sentiment analysis, price prediction, recommendation, image classification, and more. A year later, at Build 2019, Microsoft released ML.NET 1.0, including additional tools and capabilities that make training custom ML models even simpler for .NET developers.

Basics of Machine Learning

Machine learning enables computers to make predictions without explicit programming. It handles problems that are challenging (or impossible) to solve with rules-based programming (e.g., if statements and for loops). For example, if you were asked to develop an application that can determine whether or not a picture contains a dog, you might not know where to begin. Similarly, if you were asked to design a function that predicts the price of a shirt based on the description of the garment, you might start by looking at keywords like "long sleeves" and "business casual," but you might not know how to build a function that scales to a few hundred products.

In order to analyze a large amount of data more quickly and accurately than ever before and to make data-driven decisions more easily, machine learning can automate a wide range of “human” tasks, such as classifying images of dogs and other objects or predicting values, such as the price of a car or house.

Let’s make an overview of ML.NET

You can use the ML.NET framework to build your own custom ML models. This customized approach contrasts with "pre-built AI," which involves consuming pre-made general AI services from the cloud (like many of the offerings from Azure Cognitive Services). Pre-built AI can be quite effective in many situations, but it might not always meet your specific business needs, whether because of the nature of the machine learning challenge or the deployment context (cloud vs. on-premises). With ML.NET you can build and train your own models, which makes it incredibly flexible and responsive to your particular data and business domain. And because the framework ships as libraries and NuGet packages that you can use in your .NET apps, you can run ML.NET anywhere.

Who does ML.NET address?

With the aid of ML.NET, developers can quickly incorporate machine learning into practically any .NET application using their existing .NET expertise. This means that if C# (or F#, or VB) is your preferred programming language, you no longer have to learn a different language, such as Python or R, in order to create your own ML models and incorporate custom machine learning into your .NET projects. Nor do you need prior machine learning experience: the framework's tooling and features let you quickly design, train, and deploy high-quality custom machine learning models directly on your computer.

Is ML.NET free?

You can use ML.NET wherever you wish and in any .NET application because it is a free and open-source framework (similar in spirit to other .NET frameworks like Entity Framework, ASP.NET, or even .NET Core), as shown in Figure 1. This includes desktop applications (WPF, WinForms), web apps and services (ASP.NET MVC, Razor Pages, Blazor, Web API), and more. You can create and use ML.NET models both locally and on any cloud, including Microsoft Azure. Additionally, because ML.NET is cross-platform, you can use it on any OS: Windows, Linux, or macOS. To integrate AI/ML into your current .NET apps, you can even run ML.NET on Windows's default .NET Framework. By the same rule, you can train and use ML.NET models in offline contexts like desktop applications (WPF and WinForms) or any other offline .NET program (excluding ARM processors, which are currently not supported). ML.NET also provides NimbusML Python bindings. If your company employs teams of data scientists with a stronger command of Python, they can use NimbusML to construct ML.NET models in Python, which you can then use to build end-user .NET applications relatively quickly while running the models as native .NET.

When utilizing NimbusML to build/train ML.NET models that can run directly in .NET apps, data scientists and Python developers familiar with scikit-learn estimators and transforms as well as other well-known libraries in Python, such as NumPy and Pandas, will feel at ease. Visit https://aka.ms/code-nimbusml to learn more about NimbusML.


ML.NET Components

You can utilize the .NET API that ML.NET offers for two different types of actions:

  • Training ML models: building the model, typically in your back end or training environment
  • Consuming ML models: using the trained model to generate predictions in real-world production end-user apps

Data components:

  • IDataView: An IDataView is a versatile, efficient way of representing tabular data (rows and columns) in .NET. It serves as a placeholder for loading datasets for later processing, and it is the component that holds the data during data transformations and model training, since it is designed to handle high-dimensional data and huge data sets quickly. In addition to loading data from a file or enumerable into an IDataView, you can stream data from the original data source during training, so it can handle enormous data sets up to terabytes in size. IDataView objects may contain text, Booleans, vectors, integers, and other data types.
  • Data Loaders: Datasets can be loaded into an IDataView from almost any data source. You can use File Loaders for common ML sources such as text, binary, and image files, or the Database Loader to load and train directly from any supported relational database, including SQL Server, MySQL, PostgreSQL, Oracle, and more.
  • Data Transforms: Since mathematics is the foundation of machine learning, all data must be transformed into numbers or numeric vectors. To transform your data into a format that the ML algorithms can use, ML.NET offers a range of data transforms, including text featurizers and one-hot encoders.
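As a side note, the idea behind a one-hot encoding transform can be sketched in a few lines of plain C#. This is only a conceptual illustration, not the ML.NET implementation (the payment-type categories are borrowed from the taxi example later in the article):

```csharp
using System;

// Each distinct category maps to a binary vector with a single 1.
string[] categories = { "CRD", "CSH", "DIS" };

float[] OneHot(string value)
{
    var vector = new float[categories.Length];
    vector[Array.IndexOf(categories, value)] = 1f;
    return vector;
}

Console.WriteLine(string.Join(",", OneHot("CSH"))); // 0,1,0
```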

ModelTraining components:

  • Classical ML: ML.NET supports a variety of traditional ML scenarios and tasks, including regression, classification, time series, and more. From the more than 40 trainers (algorithms targeting a certain task) offered by ML.NET, you can choose and fine-tune the particular algorithm that achieves the best accuracy and most effectively solves your ML challenge.
  • Computer vision: Beginning with ML.NET 1.4-Preview, ML.NET also provides picture-based training tasks (image classification/recognition) with your own custom images. These tasks use TensorFlow as the training engine. Microsoft is attempting to make object detection training supportable as well.

ModelConsumption and Evaluation components:

  • Model consumption: ML.NET offers a variety of ways to make predictions after you have trained your custom ML model, including using the model itself for bulk predictions, the PredictionEngine for individual predictions, or the PredictionEnginePool for predictions in scalable, multi-threaded applications.
  • Model evaluation: before being used in production, a model should be evaluated to make sure it produces predictions of the desired quality. ML.NET offers evaluators relevant to each ML task so that you can determine the model's accuracy along with many other common machine learning metrics.

Extensions and Tools:

Integration of an ONNX model: ONNX is a standardized and versatile ML model format. Any pre-trained ONNX model can be run and scored using ML.NET.

Integration of the TensorFlow model: TensorFlow is one of the most popular deep learning libraries. In addition to the image classification training scenario described earlier, this API allows you to execute and score any pre-trained TensorFlow model.

Tools: To make model training even simpler, you can use ML.NET’s tools (Model Builder in Visual Studio or the cross-platform CLI). To find the best model for your data and scenario, these tools internally experiment with numerous combinations of algorithms and configurations using the ML.NET AutoML API.

Hello ML.NET

Let’s look at some ML.NET code now that you’ve seen an overview of the framework’s many parts and ML.NET itself.

It only takes a few minutes to create your own unique machine learning model with ML.NET. The code in Example 1 exemplifies a straightforward ML.NET application that trains, assesses, and uses a regression model to forecast the cost of a specific taxi ride.

// 1. Initialize ML.NET environment
MLContext mlContext = new MLContext();

// 2. Load training data
IDataView trainData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-train.csv", separatorChar:',');

// 3. Add data transformations
var dataProcessPipeline = mlContext.Transforms.Categorical.OneHotEncoding(
    outputColumnName:"PaymentTypeEncoded", "PaymentType")
    .Append(mlContext.Transforms.Concatenate(outputColumnName:"Features",
    "PaymentTypeEncoded","PassengerCount","TripTime","TripDistance"));

// 4. Add algorithm
var trainer = mlContext.Regression.Trainers.Sdca(labelColumnName: "FareAmount", featureColumnName: "Features");

var trainingPipeline = dataProcessPipeline.Append(trainer);

// 5. Train model
var model = trainingPipeline.Fit(trainData);

// 6. Evaluate model on test data
IDataView testData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-test.csv", separatorChar:',');
IDataView predictions = model.Transform(testData);
var metrics = mlContext.Regression.Evaluate(predictions,"FareAmount");

// 7. Predict on sample data and print results
var input = new ModelInput
{
    PassengerCount = 1,
    TripTime = 1150,
    TripDistance = 4, 
    PaymentType = "CRD"
};
var result = mlContext.Model.CreatePredictionEngine<ModelInput,ModelOutput>(model).Predict(input);
Console.WriteLine($"Predicted fare: {result.FareAmount}\nModel Quality (RSquared): {metrics.RSquared}");
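Example 1 references ModelInput and ModelOutput classes that are not shown. They might look roughly like the following sketch (the LoadColumn indices are assumptions about the column order in the taxi-fare CSV files):

```csharp
using Microsoft.ML.Data;

public class ModelInput
{
    [LoadColumn(0)] public string PaymentType { get; set; }
    [LoadColumn(1)] public float PassengerCount { get; set; }
    [LoadColumn(2)] public float TripTime { get; set; }
    [LoadColumn(3)] public float TripDistance { get; set; }
    [LoadColumn(4)] public float FareAmount { get; set; }
}

public class ModelOutput
{
    // Regression trainers write their prediction to the "Score" column.
    [ColumnName("Score")]
    public float FareAmount { get; set; }
}
```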

Tools for Automated Machine Learning in ML.NET

Choosing the right data transformations and algorithms for your data and ML scenario can be difficult, especially if you don't have a background in data science. Although writing the code to train ML.NET models is simple, selecting the right model can be a challenge without that expertise. However, with the preview release of Automated Machine Learning and tooling for ML.NET, Microsoft has automated the model selection process for you, so you can get started with machine learning in .NET right away, without needing prior machine learning skills.

The Automated Machine Learning tool in ML.NET, also known as AutoML, operates locally on your development computer and uses a combination of the best algorithm and settings to automatically construct and train models. Simply state the machine learning goal and provide the dataset, and AutoML will select and produce the highest quality model by experimenting with a variety of algorithm combinations and associated algorithm settings.

AutoML now supports regression (for example, price prediction), binary classification (for example, sentiment analysis), and multi-class classification (for example, issue classification). Support for more scenarios is also being developed.

Although the ML.NET AutoML API lets you use AutoML directly, ML.NET also provides tooling on top of AutoML to further simplify machine learning in .NET. The following sections use these tools to demonstrate how simple it is to build your own ML.NET model.

ML.NET Model Builder

The ML.NET Model Builder is a straightforward UI tool in Visual Studio for creating ML.NET models and generating the model training and consumption code. Internally, the tool uses AutoML to select the data transformations, algorithms, and algorithm settings for your data that produce the most accurate model.

To get a trained machine learning model, you give Model Builder three things:

  • The scenario for machine learning
  • The data set
  • How long you want to train

Let’s try out an example of sentiment analysis. You’ll develop a console application that can determine whether a comment is harmful or not.


The ML.NET Model Builder Visual Studio extension must first be downloaded and installed. You may do this online here or in Visual Studio’s Extensions Manager. Model Builder works with both Visual Studio 2017 and 2019.

The Wikipedia detox dataset (wikipedia-detox-250-line-data.tsv), which is available as a TSV file, must also be downloaded. Each row represents a review submitted by a user to Wikipedia (SentimentText), labeled as either toxic (1/True) or non-toxic (0/False).


Data preview of Wikipedia detox dataset

 

After installing the extension and downloading the dataset, create a new .NET Core Console Application in Visual Studio.

Right-click on your project in the Solution Explorer and choose Add > Machine Learning.


A new tool window with ML.NET Model Builder appears. From here, as shown in the image below, you can select from a range of scenarios. Microsoft is presently working to bring additional ML scenarios, such as image classification and recommendation, to Model Builder.

Select the sentiment analysis scenario to learn how to build a model that predicts which of two categories an item belongs to (in this case, toxic or non-toxic). This ML task is known as binary classification.


You can import data into Model Builder from a file (.txt, .csv, or .tsv) or from a database (e.g., SQL Server). In the Data screen, choose File as your input data source, then upload the wikipedia-detox-250-line-data.tsv dataset. Select Sentiment as your Label from the Column to Predict (Label) drop-down, and keep SentimentText checked as the Feature.


Move on to the Train step, where you specify the Time to train (i.e., how long the tool will spend evaluating different models using AutoML). In general, longer training periods allow AutoML to explore more models with multiple algorithms and settings, so larger datasets will need more time to train.

Because the wikipedia-detox-250-line-data.tsv dataset is less than 1MB, change the Time to train to just 20 seconds and select Start training. This starts the AutoML process of iterating through different algorithms and settings to find the best model.

In the Model Builder UI you can watch the training progress, including the time remaining, the best model and model accuracy found so far, and the last model explored.

You'll see output and evaluation metrics once you get to the Evaluate step. After 20 seconds spent exploring five different models, AutoML chose the AveragedPerceptronBinary trainer, a linear classification algorithm that excels in text classification scenarios, as the best model, with an accuracy of 70.83%.


ML.NET CLI

Additionally, ML.NET offers cross-platform tooling so that you can still use AutoML to quickly develop machine learning models even if you don’t work on Windows or with Visual Studio. To create high-quality ML.NET models based on training datasets you supply, you can install and use the ML.NET CLI (command-line interface), a dotnet Global Tool, on any command-prompt (Windows, macOS, or Linux). The ML.NET CLI generates sample C# code to run that model along with the C# code that was used to build and train it, similar to Model Builder, so you can examine the settings and algorithm that AutoML selected.
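As a rough sketch, installing the CLI and training on the same sentiment dataset might look like the following (the auto-train command and flag names reflect the preview-era CLI and may have changed in later releases):

```shell
# Install the ML.NET CLI as a dotnet global tool
dotnet tool install -g mlnet

# Let AutoML explore models for 20 seconds on the detox dataset
mlnet auto-train --task binary-classification \
  --dataset "wikipedia-detox-250-line-data.tsv" \
  --label-column-name "Sentiment" \
  --max-exploration-time 20
```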

Roadmap for ML.NET

Even though ML.NET already offers a very wide range of machine learning tasks and algorithms, there are still many other intriguing areas Microsoft wishes to expand upon and enhance, such as:

 

  • Deep Learning-based computer vision training scenarios (TensorFlow)
  • Completing the Image Classification training API, adding support for the GPU, and releasing this functionality to GA
  • Developing and making available the Object Detection training API
  • CLI and Model Builder updates: new scenarios for object detection, recommendation, image classification/recognition, and other ML tasks
  • Integration and support of ML.NET in Azure ML and Azure AutoML, enabling a thorough model lifecycle: model versioning and registry, surfaced model analysis, and explainability
  • Text analysis using TensorFlow and DNN
  • Other data loaders, like No-SQL databases
  • ARM support or full ONNX export support, for inference/scoring in Xamarin mobile apps for iOS and Android and in IoT workloads
  • Support for UWP apps targeting x64/x86, and for Unity
System.Text.Json in .NET 7 – What's New? Theory and Practice



System.Text.Json in .NET 7

System.Text.Json has been greatly improved in .NET 7 through the addition of new performance-oriented features and the resolution of critical reliability and consistency issues. The .NET 7 release introduces contract customization, which gives you more control over how types are serialized and deserialized, polymorphic serialization for user-defined type hierarchies, required member support, and many other features.

Contract Customization

By creating a JSON contract for a given .NET type, System.Text.Json can decide how that type should be serialized and deserialized. The contract is derived either at runtime using reflection or at compile time using the source generator from the type’s shape, which includes the constructors, properties, and fields that are available as well as whether it implements IEnumerable or IDictionary. Users have historically had some limited control over the derived contract through the use of System.Text.Json attribute annotations, provided they have the ability to change the type declaration.

Previously used as an opaque token only in source generator APIs, JsonTypeInfo<T> now serves as a representation of the contract metadata for a specific type T. As of .NET 7, most aspects of the JsonTypeInfo contract metadata are visible and editable by users. By implementing the IJsonTypeInfoResolver interface, users can create their own JSON contract resolution logic:

public interface IJsonTypeInfoResolver
{
    JsonTypeInfo? GetTypeInfo(Type type, JsonSerializerOptions options);
}

For the specified Type and JsonSerializerOptions combination, a contract resolver returns a JsonTypeInfo object that has been configured. If the resolver does not support metadata for the designated input type, it may return null.

The DefaultJsonTypeInfoResolver class, which implements IJsonTypeInfoResolver, now exposes contract resolution done by the reflection-based default serializer. By using this class, users can add their own specific modifications to the standard reflection-based resolution or mix it with other resolvers (such as source-generated resolvers).
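For instance, a modifier added to a DefaultJsonTypeInfoResolver can tweak the contract that reflection would otherwise produce. A minimal sketch (the Book type and the idea of hiding its Secret property are illustrative assumptions, not part of the API):

```csharp
using System;
using System.Linq;
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;

var resolver = new DefaultJsonTypeInfoResolver();

// Modifiers run after the default contract is built and may edit it;
// this one removes the Secret property from Book's JSON contract.
resolver.Modifiers.Add(typeInfo =>
{
    if (typeInfo.Type == typeof(Book))
    {
        JsonPropertyInfo secret = typeInfo.Properties.First(p => p.Name == "Secret");
        secret.ShouldSerialize = (obj, value) => false;
    }
});

var options = new JsonSerializerOptions { TypeInfoResolver = resolver };
string json = JsonSerializer.Serialize(new Book { Title = "A", Secret = "B" }, options);
Console.WriteLine(json); // {"Title":"A"}

public class Book
{
    public string Title { get; set; } = "";
    public string Secret { get; set; } = "";
}
```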

The JsonSerializerContext class, which the source generator builds on, implements IJsonTypeInfoResolver as of .NET 7. See "How to use source generation in System.Text.Json" for additional information on the source generator.

The new TypeInfoResolver field allows a custom resolver to be added to a JsonSerializerOptions instance:

// configure to use reflection contracts
var reflectionOptions = new JsonSerializerOptions 
{ 
  TypeInfoResolver = new DefaultJsonTypeInfoResolver()
};
// configure to use source generated contracts
var sourceGenOptions = new JsonSerializerOptions 
{ 
  TypeInfoResolver = EntryContext.Default 
};
[JsonSerializable(typeof(MyPoco))]
public partial class EntryContext : JsonSerializerContext { }

Type Hierarchies in .NET

Polymorphic serialization of user-defined type hierarchies is another .NET 7 addition. A hierarchy opts in by annotating the base class with the new JsonDerivedTypeAttribute, one attribute per supported derived type:

[JsonDerivedType(typeof(DerivedClass))]
public class Base
{
  public int i { get; set; }
}

public class DerivedClass : Base
{
  public int j { get; set; }
}

The code example above enables polymorphic serialization for the Base class:

Base val = new DerivedClass();
JsonSerializer.Serialize(val);

Here, DerivedClass is the run-time type, so the serialized output includes its properties.
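A type discriminator can also be registered so that the derived type survives a serialization round trip. A small sketch (the Shape/Circle hierarchy and the "circle" discriminator string are made-up examples):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Serializing through the base type emits a "$type" metadata property,
// and deserializing the same payload restores the derived type.
Shape shape = new Circle { Id = 1, Radius = 2.5 };
string json = JsonSerializer.Serialize(shape);
Console.WriteLine(json); // payload includes "$type":"circle"

Shape back = JsonSerializer.Deserialize<Shape>(json)!;
Console.WriteLine(back is Circle); // True

[JsonDerivedType(typeof(Circle), typeDiscriminator: "circle")]
public class Shape { public int Id { get; set; } }

public class Circle : Shape { public double Radius { get; set; } }
```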

Required Members in .NET

C# 11 added support for required members. This lets class authors mark the fields and properties that must be set during instantiation, and both reflection-based and source-generated serialization now respect that requirement. Here is a brief C# code example:

using System.Text.Json;

JsonSerializer.Deserialize<Student>("""{"NumberOfSubjects": 7}""");

public class Student
{
  public required string StudentName { get; set; }
  public int NumberOfSubjects { get; set; }
}

This raises a JsonException because the required member (StudentName) is missing from the JSON payload.

JsonSerializerOptions.Default

Developers can now access a read-only instance of JsonSerializerOptions through the static JsonSerializerOptions.Default property, and pass it where needed. Here is some code that exemplifies this:

public class Convert : JsonConverter<int>
{
  private static readonly JsonConverter<int> s_default =
    (JsonConverter<int>)JsonSerializerOptions.Default.GetConverter(typeof(int));

  public override void Write(Utf8JsonWriter w, int val, JsonSerializerOptions o)
  {
    w.WriteStringValue(val.ToString());
  }

  public override int Read(ref Utf8JsonReader r, Type type, JsonSerializerOptions o)
  {
    return s_default.Read(ref r, type, o);
  }
}

Performance upgrades

.NET 7 ships the fastest System.Text.Json yet. Both the internal implementation and the user-facing APIs have undergone a number of performance-focused upgrades. For a thorough analysis of the System.Text.Json performance improvements, refer to the relevant section of Stephen Toub's article, Performance Improvements in .NET 7.

Closing

This year, we have concentrated on improving System.Text.Json’s extensibility, consistency, and reliability while also committing to yearly performance improvements.

Extension methods in C#: Consideration of Capabilities and Features



How to Use Extension Methods in C#

Programming requires giving a computer specific instructions, which is typically a time-consuming operation. Computer programming languages let us speak with computers, but doing so still means providing a long list of directives. For many programmers, extension methods greatly simplify this work: rather than creating an entire method from scratch, a programmer can use an extension method because the basic structure is already in place.

There are many different ways to use extension methods in C# programming, and in some situations users write their own methods to reuse in subsequent programs. In addition to developing your own, you can use some of the most significant extension methods in the .NET Framework. Combining C# and .NET can make your life as a programmer a lot simpler.

What is an Extension Method?

Extension methods are rather prevalent in the realm of computer programming, and whether a programmer uses them in .NET or with LINQ, they can accomplish a lot of work. In fact, C# is one of the few languages, if not the only one, that natively lets you use extension methods with such simplicity.

Simply put, an extension method allows you to extend a data type in your software. In the most basic terms, you define a static method in a static class. The first parameter of the method, prefixed with the this keyword, identifies the data type the method extends.

An illustration of an extension method in use may be found below.

public static class Extensions
{
  public static int OneHalf(this int source)
  {
    return source / 2;
  }
}

Take note of a few of the program's main components. The first line, public static class Extensions, names the class. Notice the word static in the class declaration: the class must be static in order to contain extension methods.

The next line is public static int OneHalf(this int source), and it also holds important details. Like the class, the method must be static. The this keyword before the first parameter marks int as the type being extended, which means OneHalf can be called on any integer.
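With such a class in scope, the method is invoked as though it were declared on int itself; a quick self-contained sketch:

```csharp
using System;

// 10 is an int, so the extension method below can be called on it directly.
int half = 10.OneHalf();
Console.WriteLine(half); // 5

public static class Extensions
{
    public static int OneHalf(this int source) => source / 2;
}
```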

How Extension Methods Make Programming Easier

public static class ExtensionsInt
{
  public static int OneHalf(this int solution)
  {
    return solution / 2;
  }

  public static int ExponentCube(this int solution)
  {
    return (int)Math.Pow(solution, 3);
  }

  public static int ExponentSquare(this int solution)
  {
    return (int)Math.Pow(solution, 2);
  }
}

The methods above show how extension methods can simplify programming. Look at the three methods: each solves a small mathematical problem. The first divides a number by two. The second raises a number to the third power (cubes it). The third performs the same kind of task, but raises the number to the second power (squares it) instead.

Consider an example where you want to cube a value, then square the result, and finally divide the outcome by two. With conventional static method calls, you would produce a line of code that is exceedingly difficult to read. Let's examine how this would traditionally be done for the value 25.

var answer = ExtensionsInt.OneHalf(ExtensionsInt.ExponentSquare(ExtensionsInt.ExponentCube(25)));

As you can see from the example above, solving a problem like this requires a lot of nesting. You must make sure that everything is neatly nested together and in the correct order. If you make a mistake, you need to check every character in this line of code to ensure it still makes sense. Many people lose track of how many parentheses are involved in a situation like this.

On the other hand, this line of code would be much shorter if you were to use extension methods.

var answer = 25.ExponentCube().ExponentSquare().OneHalf();

The code above accomplishes the same task as the code you saw previously, but it is much shorter and much simpler to comprehend. If the program contained a mistake, it would be simple to spot, as it is so obvious what the code is doing. This is the strength of extension methods: they give you the ability to add functionality to a type you otherwise could not have changed. Compared to programs written without extension methods, your programs will read considerably more smoothly.

Important things to keep in mind about C# extension methods:

When utilizing C# extension methods, keep the following in mind:

  1. Only the static class may contain definitions of extension methods.
  2. Extension methods should always be defined as static methods, but they are invoked like instance (non-static) methods once they are bound to a class or structure.
  3. The first parameter of an extension method is the binding parameter: it is prefixed with the this keyword and names the type to which the method binds.
  4. Only one binding parameter can be specified for an extension method, and it must be first in the parameter list.
  5. If necessary, a normal parameter can be added to an extension method definition starting from the second position in the parameter list.
  6. By including the extension method’s namespace, extension methods can be utilized anywhere in the application.
  7. Method overriding is not supported by the extension method.
  8. It has to be defined in a static class at the top level.
  9. Fields, properties, and events are excluded.

Advantages:

  1. The main benefit of the extension method is that it avoids using inheritance to add new methods to the existing class.
  2. Without changing the existing class’s source code, new methods can be added to it.
  3. Additionally, it can be used with closed classes.
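The third point deserves a quick illustration: string is a sealed (closed) class, so you cannot inherit from it, yet an extension method can still add behavior to it (the Shout method here is a made-up example):

```csharp
using System;

string shouted = "hello".Shout();
Console.WriteLine(shouted); // HELLO!

public static class StringExtensions
{
    // string is sealed, but extending it this way is still allowed.
    public static string Shout(this string source) => source.ToUpper() + "!";
}
```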

Closing

Hope this helped you understand what C# extension methods are and how to utilize them. They give us the ability to quickly add additional functionality to an existing type without having to either develop a wrapper for it or inherit from it.

Be careful not to misuse them, as with any design pattern! Everything has its place. Extension methods are certainly cool, but they might not always be the answer to your issue. Therefore, use caution in how and where you choose to utilize them.