.NET 7 Has Already Been Released! What’s New?


What’s new in .NET 7

To give you a brief introduction, .NET is Microsoft’s framework for creating and developing all kinds of apps. One of its key characteristics is that the platform is cross-platform, open source, and free.

With the .NET platform you can create everything from desktop programs (for operating systems such as Windows, macOS, or Linux) to mobile applications (for Android or iOS), web apps, IoT software, and even games (using the C#-based Unity engine).

The next version of .NET, generally known as .NET 7 (or dotnet 7), will be released by Microsoft by the end of 2022.

.NET 7 Enhancements – What’s New in .NET 7?

Faster and lighter applications (Native AOT)

Native AOT (ahead-of-time compilation) is another of the new features and improvements introduced by Microsoft in .NET 7. For a while, Microsoft’s development in this area was centered on the experimental Native AOT project; now, as many of us have been asking for a long time, Microsoft has decided to bring a couple of Native AOT updates into the product.

For those who are unfamiliar with Native AOT: ahead-of-time compilation (simply AOT) generates code at compile time rather than at run time.

Microsoft currently provides ReadyToRun, also known as RTR (client/server applications), and Mono AOT (mobile and WASM applications) for this purpose. Furthermore, Microsoft emphasizes that Native AOT does not replace Mono AOT or WASM.

Native AOT is distinguished by its name: it generates code at compile time, but as native code. Its main advantage, according to Microsoft, is improved performance, particularly in:

  • Startup time
  • Memory usage
  • Disk size
  • Access to restricted platforms

How does Native AOT work?

Microsoft’s explanation: “Applications begin to run as soon as the operating system pages them into memory. The data structures are optimized for running AOT-generated code rather than compiling new code at runtime. This is how languages such as Go, Swift, and Rust compile. Native AOT works best in environments where startup time is critical.”

Observability

Microsoft also improves support for the cloud-native specification (OpenTelemetry). Although it is still in development in .NET 7, support for the “Allow samplers to modify tracestate” proposal has been added, so samplers are now allowed to modify the trace state. Here’s an example from Microsoft:

//  ActivityListener Sampling callback
    listener.Sample = (ref ActivityCreationOptions<ActivityContext> activityOptions) =>
    {
        activityOptions = activityOptions with { TraceState = "rojo=00f067aa0ba902b7" };
        return ActivitySamplingResult.AllDataAndRecorded;
    };
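For context, the Sample callback above is set on an ActivityListener that must be registered with ActivitySource. A minimal sketch of the surrounding setup (the source name is made up for illustration):

using System.Diagnostics;

var listener = new ActivityListener
{
    // Listen to every ActivitySource; a real app would filter by name.
    ShouldListenTo = source => true,
    // The .NET 7 sampling callback from the example above.
    Sample = (ref ActivityCreationOptions<ActivityContext> activityOptions) =>
    {
        activityOptions = activityOptions with { TraceState = "rojo=00f067aa0ba902b7" };
        return ActivitySamplingResult.AllDataAndRecorded;
    }
};
ActivitySource.AddActivityListener(listener);

// Activities created from any source now flow through the sampler.
using var source = new ActivitySource("Demo.Source");
using var activity = source.StartActivity("DoWork");
Console.WriteLine(activity?.TraceStateString); // rojo=00f067aa0ba902b7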

Shortened startup time (Write-Xor-Execute)

As we saw at the outset, Microsoft has decided to prioritize performance enhancements, and this is no exception. With the “Reimplement stubs to improve performance” change, Microsoft reports an improvement in startup time of up to 10-15%.

This is primarily due to a significant decrease in the number of modifications made to executable memory after code generation at runtime.

Let’s take a look at the benchmarks:

Win x64 start-up time benchmark (source: GitHub)
Linux Intel x64 start-up time benchmark (source: GitHub)
Linux AMD x64 start-up time benchmark (source: GitHub)

Containers and Cloud Native

The Modern Cloud is another new feature described by Microsoft. As previously stated, cloud-native applications are typically built from the ground up to take advantage of all the resources of containers and databases. This architecture is excellent because it enables easy scaling of applications through the use of autonomous subsystems. This, in turn, lowers long-term costs.

What .NET will bring is the ease of creating cloud-native applications, as well as various improvements to the developer experience — thank you, Microsoft, for thinking of us.

Furthermore, it will greatly simplify the installation and configuration of authentication and authorization systems. And, as always, minor performance enhancements at application runtime.


It is easier to upgrade .NET applications

Migrating older applications to .NET 6 has not been the easiest thing in the world, as we all know — and some have suffered through it. That is why Microsoft is shipping new upgrade aids for older applications. The main points are as follows:

  • More code analyzers
  • More code fixers
  • Compatibility checkers

All of this will be accompanied by the .NET Upgrade Assistant, which will greatly assist in upgrading those applications while saving the developer time — thank you, Microsoft.

Hot Reload has been improved

My favorite .NET 6 feature will be updated in .NET 7. C# Hot Reload will be available in Blazor WebAssembly and .NET for iOS and Android.

Microsoft has also added new features such as:

  • Adding static lambdas to existing methods
  • Adding lambdas that capture this to existing methods that already have at least one lambda capturing this
  • Adding new static or non-virtual instance methods to existing classes
  • Adding new static fields to existing classes
  • Adding new classes

API updates and enhancements

System.Text.Json contains a few minor quality-of-life enhancements:

  • A default JsonSerializerOptions instance is now exposed.
  • A MaxDepth property was added and is derived from the corresponding serializer value.
  • Patch methods have been added to System.Net.Http.Json.

This was not possible in previous versions, but with the additions to System.Text.Json, serialization and deserialization of polymorphic type hierarchies are now possible.

Let’s look at the Microsoft example:

[JsonDerivedType(typeof(Derived))]
public class Base
{
    public int X { get; set; }
}

public class Derived : Base
{
    public int Y { get; set; }
}
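A quick usage sketch (my own illustration, not part of the Microsoft sample): because Base declares Derived via JsonDerivedType, serializing through a Base-typed variable now includes the derived properties:

using System.Text.Json;

Base value = new Derived { X = 1, Y = 2 };

// The serializer resolves the runtime type, so Y is included.
string json = JsonSerializer.Serialize(value);
Console.WriteLine(json); // e.g. {"X":1,"Y":2} (property order may vary)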

Activity.Current New Standard

In .NET 6, the most common way to track the span context across the different threads being managed is to use AsyncLocal&lt;T&gt;.

Jeremy Likness stated in his post Announcing .NET 7 Preview 4:

“With Activity becoming the standard to represent spans, as used by OpenTelemetry, it is impossible to set the value-changed handler, because the context is tracked via Activity.Current.”

In .NET 7 we can use Activity.CurrentChanged to receive such notifications. Consider the following Microsoft example:

public partial class Activity : IDisposable
{
    public static event EventHandler<ActivityChangedEventArgs>? CurrentChanged;
}
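A minimal subscription sketch (my own illustration; I am assuming ActivityChangedEventArgs exposes the previous and current activities, per the .NET 7 API):

Activity.CurrentChanged += (sender, e) =>
{
    // Fires whenever Activity.Current changes on the current thread.
    Console.WriteLine($"Current changed: {e.Previous?.OperationName} -> {e.Current?.OperationName}");
};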

Microseconds and nanoseconds in date/time structures

The “tick” was the smallest time increment that could be used, and its value is 100ns. The issue was that in order to determine a value in microseconds or nanoseconds, you had to calculate everything based on the “tick,” which was not the most efficient thing in the world.

According to Microsoft, they will now add microsecond and nanosecond values to the various date and time structures that exist.

Consider the following Microsoft example:

.NET 7 DateTime

namespace System
{
    public struct DateTime
    {
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond);
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond, System.DateTimeKind kind);
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond, System.Globalization.Calendar calendar);
        public int Microsecond { get; }
        public int Nanosecond { get; }
        public DateTime AddMicroseconds(double value);
    }
}

.NET 7 TimeOnly

namespace System
{
    public struct TimeOnly
    {
        public TimeOnly(int hour, int minute, int second, int millisecond, int microsecond);
        public int Microsecond { get; }
        public int Nanosecond { get; }
    }
}
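A quick usage sketch based on the API surface above (my own illustration):

// 123 milliseconds and 456 microseconds via the new constructor overload.
var dt = new DateTime(2022, 11, 8, 12, 30, 45, 123, 456);
Console.WriteLine(dt.Microsecond);                     // 456
Console.WriteLine(dt.AddMicroseconds(44).Microsecond); // 500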

Single Memory Cache

With the AddMemoryCache API, you can now create a single memory cache and have it injected so that you can call GetCurrentStatistics. Consider the following Microsoft example:

// when using `services.AddMemoryCache(options => options.TrackStatistics = true);` to instantiate
[EventSource(Name = "Microsoft-Extensions-Caching-Memory")]
internal sealed class CachingEventSource : EventSource
{
    // Field declarations added here for completeness; the Microsoft snippet omits them.
    private readonly IMemoryCache _memoryCache;
    private PollingCounter? _cacheHitsCounter;

    public CachingEventSource(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    protected override void OnEventCommand(EventCommandEventArgs command)
    {
        if (command.Command == EventCommand.Enable)
        {
            if (_cacheHitsCounter == null)
            {
                _cacheHitsCounter = new PollingCounter("cache-hits", this,
                    () => _memoryCache.GetCurrentStatistics().CacheHits)
                {
                    DisplayName = "Cache hits",
                };
            }
        }
    }
}
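For orientation, a minimal registration and consumption sketch (my own illustration; note that in the released .NET 7 API the statistics properties are named TotalHits/TotalMisses, whereas the preview snippet above uses CacheHits):

using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddMemoryCache(options => options.TrackStatistics = true);

using var provider = services.BuildServiceProvider();
var cache = provider.GetRequiredService<IMemoryCache>();

cache.Set("answer", 42);
_ = cache.Get("answer"); // one hit

// GetCurrentStatistics returns null unless TrackStatistics was enabled.
MemoryCacheStatistics? stats = cache.GetCurrentStatistics();
Console.WriteLine(stats?.TotalHits); // 1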

New Tar APIs

We will now have cross-platform APIs for extracting and modifying (reading and writing) tar archives. As is customary, Microsoft has provided examples, so here are a few:

.NET 7 Archive Tar API

// Generates a tar archive where all the entry names are prefixed by the root directory 'SourceDirectory'
TarFile.CreateFromDirectory(sourceDirectoryName: "/home/dotnet/SourceDirectory/", destinationFileName: "/home/dotnet/destination.tar", includeBaseDirectory: true);

.NET 7 Extract Tar API

// Extracts the contents of a tar archive into the specified directory, but avoids overwriting anything found inside
TarFile.ExtractToDirectory(sourceFileName: "/home/dotnet/destination.tar", destinationDirectoryName: "/home/dotnet/DestinationDirectory/", overwriteFiles: false);
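Beyond the TarFile convenience methods, the new System.Formats.Tar namespace also exposes lower-level types. A minimal sketch for listing the entries of an archive (my own illustration):

using System.Formats.Tar;

using FileStream archive = File.OpenRead("/home/dotnet/destination.tar");
using TarReader reader = new TarReader(archive);

// GetNextEntry returns null once the archive is exhausted.
while (reader.GetNextEntry() is TarEntry entry)
{
    Console.WriteLine($"{entry.EntryType}: {entry.Name}");
}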


Code fixer

Of course, you can’t have an Analyzer without a Code Fixer. Well, Microsoft tells us that the first of its two functions (for the time being, let’s keep waiting for more information) is in charge of suggesting RegexGenerator source generator methods with the option of overriding the name that comes by default.

The second function of this .NET 7 code fixer is that it replaces the original code with a call to the newly generated method.

Separately, OSR (On-Stack Replacement) is an excellent addition to tiered compilation. It allows the runtime to switch a method to a more optimized version of its code while that method is still running.

Microsoft claims that:

“OSR allows long-running methods to switch to more optimized versions in the middle of their execution, allowing the runtime to jit all methods quickly at first and then transition to more optimized versions when those methods are called frequently (via tiered compilation) or have long-running loops (via OSR).”

With OSR, we can gain up to 25% more startup speed (Avalonia ILSpy test), and TechEmpower benchmarks show improvements ranging from 10% to 30%.

Improved Regex source generator

With the new regex source generator, pattern setup can be up to five times faster than with the compiled engine, without sacrificing any of the associated performance benefits.

Furthermore, because the generator runs at compile time, it only applies when the pattern is known at compile time; when users only know their pattern at runtime, the traditional Regex APIs remain the way to go. If your pattern is known at compile time, this generator is recommended instead of the traditional compiler-based approach.

All you have to do is declare a partial method, mark it with the RegexGenerator attribute containing your pattern, and the source generator will implement that method to return a precompiled regular expression object (with all other features enabled). The generator creates that method and updates it as needed based on changes to either the pattern string or the passed-in options (such as case sensitivity, etc.).

Consider the following comparison of the Microsoft example:

.NET 6 Regex (before the source generator)

public class Foo
{
    public Regex regex = new Regex(@"abc|def", RegexOptions.IgnoreCase);

    public bool Bar(string input)
    {
        bool isMatch = regex.IsMatch(input);
        // ..
        return isMatch;
    }
}

.NET 7 Regex source generator

public partial class Foo // <-- Make the class a partial class
{
    [RegexGenerator(@"abc|def", RegexOptions.IgnoreCase)] // <-- Add the RegexGenerator attribute and pass in your pattern and options
    public static partial Regex MyRegex(); // <-- Declare the partial method, which will be implemented by the source generator

    public bool Bar(string input)
    {
        bool isMatch = MyRegex().IsMatch(input); // <-- Use the generated engine by invoking the partial method.
        // ..
        return isMatch;
    }
}

Improvements made to the SDK

Using dotnet new will be much easier for everyone who works with .NET. With major updates like improved intuitiveness and faster tab completion, it’s difficult to find anything negative about this change.

Added new command names

The command names themselves have changed: commands shown in this output no longer include the -- prefix, as they do today. This was done to match what a user would expect from subcommands in a command-line app.

The old versions of these commands (--install, etc.) are still available so that scripts don’t break, but the hope is that one day those commands will include deprecation warnings so you can transition over without risk.

Tab key completion

The dotnet command line interface has long supported tab completion on shells such as PowerShell, bash, zsh, and fish. When given input, the commands can choose what they want to display.

In .NET 7, the new command supports tab completion for a wide range of templates, options, and subcommands:

dotnet new <TAB>

angular          blazorserver     web              blazorwasm
classlib         console          editorconfig     gitignore
globaljson       grpc             mstest           wpf
mvc              nugetconfig      nunit            nunit-test
page             razor            razorclasslib    razorcomponent
react            reactredux       sln              tool-manifest
viewimports      viewstart        webapi           webapp
webconfig        winforms         winformscontrollib
winformslib      worker           wpfcustomcontrollib
wpflib           wpfusercontrollib                 xunit

--help           -?               -h               /h               /?
install          list             search           uninstall        update

If you want to know how to enable it, I recommend you read the Microsoft guide. Similarly, if you want to learn about all of the features available, I recommend returning to the original source.

Dynamic PGO improvements

Microsoft recently unveiled a new advancement in program optimization. Dynamic PGO is intended to make some significant changes to the Static PGO that we are already familiar with. Whereas Static PGO requires developers to use special tools and a training step, Dynamic PGO does not: all you need to do is run the application you want to optimize, and the profile data is collected as it runs.

According to Andy Ayers on GitHub, the following has been added:

“Extend local morph ref counting so that we can determine single-def, single-use locals.”

“Add a phase that runs immediately after local morph that attempts to forward single-def, single-use local defs to their uses when they are in adjacent statements.”

Better System.Reflection performance

To begin with the performance enhancements among the new .NET 7 features, we have the improvement of the System.Reflection namespace. The System.Reflection namespace contains the types that expose metadata, making it easy to retrieve information about loaded modules, members, assemblies, and more.

They are mostly used for manipulating instances of loaded types, and they make it simple to create types dynamically.

Microsoft’s update has significantly reduced the overhead of invoking a member using reflection in .NET 7. In terms of numbers, the most recent Microsoft benchmark (created with the BenchmarkDotNet package) shows up to 3-4x faster performance.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Reflection;

namespace ReflectionBenchmarks
{
    internal class Program
    {
        static void Main(string[] args)
        {
            BenchmarkRunner.Run<InvokeTest>();
        }
    }

    public class InvokeTest
    {
        private MethodInfo? _method;
        private object[] _args = new object[1] { 42 };

        [GlobalSetup]
        public void Setup()
        {
            _method = typeof(InvokeTest).GetMethod(nameof(InvokeMe), BindingFlags.Public | BindingFlags.Static)!;
        }

        [Benchmark]
        // *** This went from ~116ns to ~39ns, or 3x (66%) faster. ***
        public void InvokeSimpleMethod() => _method!.Invoke(obj: null, new object[] { 42 });

        [Benchmark]
        // *** This went from ~106ns to ~26ns, or 4x (75%) faster. ***
        public void InvokeSimpleMethodWithCachedArgs() => _method!.Invoke(obj: null, _args);

        public static int InvokeMe(int i) => i;
    }
}

Haven’t Tried Reverse Range in Python? Grasp Some Functions That Will Help You

Reverse Range in Python

Before we dig into reverse ranges in Python, we need to see how loops in Python work. Knowing Python’s for ... in and range() functions is very useful for any reverse-iteration code in Python. So here is an instance:

sentence = ['Harry', 'is', 'happy!']
 
for iterable in sentence:
    print(iterable)
    
#Output:
#Harry
#is
#happy!

A loop is used for a step-by-step walkthrough: for example, walking through a list, string, array, etc. Python has a very easy loop style (not like the C-style for (i=0; i<n; i++)); its “for ... in” loop is similar to the for-each loop in other languages.

In our example we can clearly see the syntax: the loop variable ‘iterable’ walks through our ‘sentence’. It takes the first element (‘Harry’) on the first pass, then ‘is’, and then ‘happy!’. That’s how we receive the output: ‘Harry is happy!’.

Here, let’s see one more example before we reverse a list in Python:

received_data = [15, 6, 2, 12, 54]
result = []

for number in received_data:
    temp = number / 3
    result.append(temp)

print(result)
#Output:
#[5.0, 2.0, 0.6666666666666666, 4.0, 18.0]

So this loop is used to make a new list ‘result’ whose elements are divided by three. Our loop variable is now called ‘number’: it takes the elements of ‘received_data’ one by one, divides each by three, and appends the result to the new list.

What is Python’s range() function?

To understand it, we need to start with its history, and the history starts with Python 2. Python 2 was a good version of Python, but some of its decisions look odd now, and range() is one of them. range() in Python 2 is not the same as range() in Python 3: the Python 3 range() method is equivalent to the Python 2 xrange() method, not its range(). In Python 3, most such functions return iterable objects, not lists as in Python 2, in order to save memory. Examples include zip(), filter(), map(), and the .keys(), .values(), and .items() dictionary methods.

So the xrange() method in Python 2 was more efficient for creating a huge listing of numbers (including reversed ones), and that’s why we now have it as the built-in range() function in Python 3.


How do we use the range() function?

Python’s range() means that you can create a series of numbers in a given range. Passing arguments to the function lets you decide where that series of numbers begins and ends, as well as how big the difference will be between one number and the next.

Let’s see a Python range() print example:

for iterable in range(10, 70, 10):
    division = iterable / 10
    print(f'{iterable} : 10 = {int(division)}')

In this for loop, thanks to range(), we simply created a series of numbers divisible by 10, so we didn’t have to provide each of them ourselves.

That said, using the built-in range() function to index over data in for-loops is usually frowned upon.

range() is a good choice when you want to create iterables of numbers, but it is actually not great when you need to iterate over existing data. If you want to do that, it is better to loop over the data directly using the “in” operator:

sentence = ['Harry', 'is', 'happy!']

for iterable in sentence:
    print(iterable)

#Output:
#Harry
#is
#happy!

There are three forms of the range() syntax:

  • When range takes only a stop argument: range(stop)
  • When range takes start and stop arguments: range(start, stop)
  • And when you use a step: range(start, stop, step)

Stop argument in range()

When you type only one argument, it means you want your series of numbers to stop at that argument. In other words, you get a series of numbers that starts from 0 and includes every number in the row up to, but not including, the number you provided as the argument.

Let’s see an example with a range from 0 to 10 in Python:

for iterable in range(10):
    print(iterable)

#Output:
#0
#1
#2
#3
#4
#5
#6
#7
#8
#9

And here we see that we got all the numbers except the argument we typed in. That means “10” here is our stop argument, and it is not included.

Start argument in range()

When you also need to control the start, one of the ways is to type two arguments in the brackets. You get to decide not only where your numbers stop but also where they begin, which lets you start from “1” or any other number. The only rule is the order of the arguments: the first argument is the start argument and the second is the stop argument.

So let’s see the example where we start our series not from 0 but from 1. Here is a Python range example:

for iterable in range(1, 4):
    print(iterable)

#Output:
#1
#2
#3

Here we get all the numbers starting from 1. If we provide a different number as the first argument in range(), the series will start from it, as long as it is not the same as (or larger than) the stop number.

And what happens if we provide the third argument?

Range step Python functions

The answer is here. The third argument means that you can choose not only where the series of numbers stops or starts but also how large the difference is between one number and the next. If you type nothing, the step defaults to “1”.

The only rule is that you cannot write 0 in the step argument, because a traceback error will appear (negative steps, as we will see below, are allowed). Here is an example:

range(1,4,0)

#Output:
#ValueError: range() arg 3 must not be zero

So now you understand the syntax of the range() function. Let’s try one more example of step argument usage:

for iterable in range(10, 70, 10):
    division = iterable / 10
    print(f'{iterable} : 10 = {int(division)}')

#Output:
#10 : 10 = 1
#20 : 10 = 2
#30 : 10 = 3
#40 : 10 = 4
#50 : 10 = 5
#60 : 10 = 6

In this example, we see how the step argument helps us increase towards a higher number. In Python and other programming languages, this is called “incrementing”.

Incrementing using range() function

As said before, the step argument is used for incrementing, and for that it should be a positive number. The next example shows what incrementing in Python means:

for iterable in range(3, 100, 25):
    print(iterable)

#Output:
#3
#28
#53
#78

Here we can see the difference between the numbers in the output of our loop. We got a range of numbers where each one is greater than the preceding number by our step argument.

So incrementing means our steps forward. But what about steps backward? Do they exist?

Decrementing using range() function

Incrementing is when you step forward in the loop. So what does decrementing mean? It means that when your step argument is negative, you move through the series of numbers decreasingly; in other words, a Python range backwards. In the following example you can see how it works:

for iterable in range(100, -50, -25):
    print(iterable)

#Output:
#100
#75
#50
#25
#0
#-25

Here we have seen a step equal to -25 and the output it produces: we decremented by 25 on each pass of the loop. If we provided a step argument of -3, we would get a range of numbers where each is smaller than the preceding number by 3, the value provided in the third argument of the range() function.

Floats in range()

As you may have noticed, all the examples and explanations so far used only whole numbers, in other words, integers. So there is a second restriction after 0: the range() function cannot work with float numbers.

If you call range() with a float number the following output will be:

for iterable in range(6.68):
    print(iterable)

#Output:
#TypeError: 'float' object cannot be interpreted as an integer

But if you need to work with floats you can use the NumPy library, where there is arange() function.

Reversed() function

Now that we know all the styles of range() in Python, we can learn the reverse function in Python. To create a Python range that decrements, we can use range(start, stop, step) with a negative step. But Python 3 also has a built-in reverse function called reversed(). To make it work, you wrap the range() call inside reversed(). Let’s see the Python code to reverse a range:

for iterable in reversed(range(6)):
    print(iterable)

#Output:
#5
#4
#3
#2
#1
#0

range() with a negative step lets you iterate over a decreasing sequence of numbers, while reversed() is typically used to loop through an existing sequence in reverse order. So that is how you can reverse an array or a range of numbers in Python.

Also, the reversed() function can work with strings.

Reverse Python list

When we want to reverse a list in Python, we use the reversed() function. Our list is passed as the argument, and reversed() returns an iterator that walks through the items in reverse order. Here is the reverse code in Python:

example_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

reversed_list = reversed(example_list)
print(reversed_list)
print(list(reversed_list))

#Output:
#<list_reverseiterator object at 0x000001B25276B700>
#[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

Here we can see how to call reversed() with lists. First we passed example_list as an argument to reversed(), then stored the result in reversed_list. list() takes the iterator and returns a new list that contains the same items as example_list, but reversed.

Reversed() function with string

To get it to work you need two functions: reversed() and str.join(). When you pass a string into reversed() you get only an iterator that yields the characters in reverse order. Here is an example:

greeting = reversed('Hello, Kodershop!')
print(next(greeting))
print(next(greeting))
print(next(greeting))

#Output:
#!
#p
#o

In the example above, we used next() on the iterator, so we get each character starting from the right side of the original string.

An important thing to note about reversed() is that the resulting iterator yields characters directly from the original string. In other words, it does not create a new inverted string but walks through the characters of the existing one in reverse. This behavior is efficient in terms of memory consumption.

To build a whole reversed string, you can pass the reversed() iterator directly as an argument to .join(). Here is a case with .join():

greeting = ''
print(greeting.join(reversed("pohsredoK ,olleH")))

#Output:
#Hello, Kodershop

TOP Agile Metrics – How to Correctly Measure Where We Are Moving


Agile Metrics

Agile’s introduction has changed how businesses operate. Agile has been a game-changer for businesses, and the benefits are clear to see—fewer expenses and shorter time to market are just a couple. But making the changeover alone won’t allow businesses to fully benefit from agile. Agile adoption won’t be very beneficial to a company unless agile metrics are taken into consideration.

Agile metrics are used to track productivity throughout the stages of the software development life cycle (SDLC). In this post, we’ll talk about the agile metrics that are crucial for success and look at their significance to a company.

Agile, however, doesn’t always enable a business to produce high-quality work on schedule. Lean is a method for minimizing unnecessary costs, activities, and labor. Together, lean and agile produce a quick, error-free application that guarantees customer satisfaction. Additionally, lean and agile cut out waste that costs a business money. We’ll talk about agile metrics first, then lean metrics and why they’re important. We’ll look at a few lean metrics that are crucial for a company’s performance in addition to agile.

Let’s have a look at the details.

What are metrics in Agile?

Metrics are measuring standards. Agile metrics are benchmarks that assist software teams in tracking their productivity throughout the various stages of the SDLC.

The development process is not complete without agile metrics. Agile metrics assist organizations or teams using the agile paradigm in evaluating the quality of their product.

Agile metrics help control team performance by gauging a team’s productivity. If there are any gaps, they are immediately revealed. With the aid of these metrics, it is simpler to address flaws because the data and its usage are quantifiable. For instance, velocity metrics can be used to monitor the output of your team.

Why Agile metrics are necessary

Let’s dissect how agile metrics operate now that we understand what they are. Continuous improvement (CI) is the foundation of the agile methodology. You cannot, however, impose this upon teams; it must originate within. Self-improvement (SI) is therefore necessary. It is safe to conclude that CI cannot exist without SI.

Agile includes immediate delivery as a crucial element. But in this instance, SI shouldn’t be disregarded. Teams that practice SI do better than those that don’t. But creating a SI that is effective and sustainable is not simple. It requires a management framework because it is a lengthy procedure. Agile metrics assist SI by monitoring software quality and team performance. These measurements have some direct effects on CI.

A crucial component of agile is producing a high-quality output in addition to enhancing continuity. Finding a balance between these two can be difficult, though. As a result, teams now need criteria by which to gauge their development. Agile metrics help teams become self-managing in the end. They also aid businesses in providing value. At the same time, CI seamlessly integrates into the process.

Why should you include Agile Metrics in your projects?

Agile metrics track several project development parameters. These agile KPIs are crucial for your project: you’ll learn more about the development process thanks to them, and they will also make the overall process of releasing software easier.

1.    Sprint Burndown Report

A scrum framework consists of teams. Sprints are how they structure their procedures. Since a sprint has a deadline, it’s crucial to periodically check on task progress. For tracking the accomplishment of various tasks throughout a sprint, create a sprint burndown report. The two primary units of measurement in this situation are time and the amount of work remaining. The time is indicated on the X-axis. The remaining work is shown on the Y-axis, measured in hours or story points. Before a sprint, the team estimates the workload. By the end of the sprint, the workload should be finished.

2.  Control Chart

Agile control charts concentrate on the amount of time it takes for jobs to move from the “in progress” to the “complete” phase. They serve the function of measuring a single issue’s cycle time. Deliveries from teams with consistent cycle periods are predictable. In addition, teams with quick cycle times produce a lot of work. Teams can increase the flexibility of their processes by measuring cycle times. For instance, when modifications are made, the results are immediately apparent. Team members can then make the necessary changes. Every sprint should generally aim to obtain a quick and reliable cycle time.

3.  Escaped Defects

There is a great deal of unexpected harm when there are flaws in the production process. The team must deal with the issues they raise. When a release goes into production, metrics for escaped bugs aid in bug detection. The software’s quality can be evaluated in its undeveloped state. Look at Plutora’s test management system for additional details on automated defect minimization.

4.   Net Promoter Score

The Net Promoter Score calculates how likely customers are to tell others about a product or service. It is an index with a value between -100 and 100. Customer retention is a crucial aspect of a business’ success. The Net Promoter Score can be used as a stand-in for this objective.

5.  Cumulative Flow Diagram

The cumulative flow diagram (CFD) provides uniformity in the team’s workflow. Time is depicted on the X-axis. The Y-axis represents the number of problems. The diagram should ideally flow smoothly from left to right. If the color bars are flowing unevenly, smooth them out. The band’s shrinking indicates that throughput is greater than entry rate. If the band becomes wider, your workflow capacity is larger than necessary and can be transferred to another location to improve flow.

The CFD calculates the current status of the ongoing work. With it, you can take action to quicken the process. Bottlenecks are clearly represented visually in the diagram. You can examine how bottlenecks developed initially. The team can then take action to get rid of them and improve after that.

6.   Code Coverage

The percentage of code that unit tests cover is measured by code coverage. This measure is available for every build. It displays the raw percentage of code coverage. This metric provides a good perspective on advancement. However, it excludes other forms of testing. As a result, high code coverage numbers don’t always indicate high quality.

7.    Value Delivered

Here, project managers give each need a value. This metric can be expressed in terms of money or points. The main priority should be giving high-value features a chance to be implemented. This metric’s rising trend indicates that things are proceeding as planned. A declining trend, on the other hand, isn’t encouraging. It indicates that lower-value features are currently being implemented. The team needs to make amends if such is the case. You might even need to halt product development on occasion.

8.     Blocked Time

This metric attaches a blocker sticker to a task. It indicates that, because of a dependency, the assignee is unable to move on with a specific job. The blocked card on the task board is moved to the right as soon as the dependency is satisfied. To gauge blockers, count the quantity and duration of blocked cards. You’ll be able to complete your “in progress” tasks quickly once the blocks are removed.

9.    Quality Intelligence

For clarity on software quality, you must use the quality intelligence metric. It aids in locating recent code alterations. Let’s say the team has created new code, but testing hasn’t been completed yet. Perhaps there are cases where that code’s quality slips. Quality intelligence helps in making exactly that determination: it informs the team when they ought to devote more time to testing.

10.   Work Item Age

Work item age measures how long an in-progress item has been open since work on it started. Tracking it alongside cycle time helps the team spot tasks that have been in progress far longer than usual, so they can intervene before those tasks put the sprint at risk.

11.    Lead Time 

Lead time is the interval between the time a delivery request is made and the time the goods are actually delivered. Lead time encompasses all steps taken to complete a product, including creating a business requirement and addressing issues. It is a crucial indicator because it gives a precise time calculation for the whole procedure.

12.    Failed Deployments  

A useful quality metric is failed deployments. It aids in determining the total number of deployments. Teams can also assess the dependability of the production and testing environments. Additionally, this metric determines when a sprint is prepared to go into production.

13.    Epic and Release Burndown  

Epic and release burndowns concentrate on the bigger picture, as opposed to a sprint burndown. They monitor progress over a vast body of work. In a sprint, there are numerous epics and versions of the work. Therefore, it’s crucial to monitor both their development and each sprint. The workflow within the epic and version must be understood by the entire team. Burndown charts for releases and epics make that possible.

14.    Velocity 

The team’s average work output during a sprint is measured by velocity. In this instance, the report has undergone multiple revisions. The number of iterations affects how accurate the forecast is: the more iterations, the more accurate the forecast. Hours or story points are the units of measurement. The ability of a team to work through backlogs is also influenced by velocity. Velocity tends to change over time, so tracking it is crucial for ensuring reliability. The team needs to make a change if the velocity starts to drop.

Agile benefits both customers and staff. In an agile framework, the indicators we covered aid in streamlining the development process. Development is the focus of agile metrics. Lean, however, is the route to go in order to optimize the production processes. Lean and agile both enable teams to provide better value. They also emphasize providing clients with value quickly. Let’s discuss lean metrics now: we’ll learn what they are and talk about a few lean metrics that you absolutely must incorporate into your development cycle.

What Are Lean Metrics?

Let’s move on to lean metrics now that we have a thorough understanding of agile metrics. Any company’s best period is when revenues increase. However, as sales rise, the burden also does. The development operation expands concurrently. It is therefore challenging to gauge performance effectiveness. The quality of a product can occasionally be compromised when performance is measured while under a heavy workload. This method needs to be productive and profitable. Lean metrics are thus frequently used by businesses to assess and gauge their performance.

To increase manufacturing efficiency, most businesses concentrate on eliminating waste. But before implementing lean management, it’s crucial to understand how the different elements are measured. Lean is all about getting rid of the unnecessary. Lean metrics keep track of and show off advancements in production and management. They oversee and manage the development processes as well, so customers receive consistent quality. Lean metrics point out where work needs to be done, which makes changes easier to implement.

Importance of Lean Metrics

Profitability is the foundation of every business. The key priorities are increasing production and decreasing waste. So how does a company do these? The solution is straightforward: by using lean concepts. Lean operations increase productivity by reducing labor and material waste. Additionally, they raise the level of production at the same time. Maintaining minimal inventories in accordance with demand is part of lean management. Additionally, reducing downtime and transportation losses is necessary.

A good lean operation depends on lean metrics. They support the sustainability of businesses. As lean increases productivity, workers can concentrate on innovation. The finished product is of excellent quality and contains few bugs.


The Missing Metric: Quality Intelligence

We provided a number of potent metrics that offer crucial insights into the agile process. None of them, however, not even the burndown chart or cycle time, can clearly and effectively give us the most crucial information: “how good” is the software being developed by our developers.

This missing metric, a clear view of software quality, can be supplied by a new class of tools called Software Quality Intelligence. The SeaLights platform, which integrates information on code changes, production use, and test execution, provides the following quality metrics:

  • Test gap analysis: finding places in the code that have recently undergone changes or are used in production but have not been tested. Test gaps are the ideal place to invest in quality improvement.
  • Quality trend intelligence: identifies the system components whose quality coverage is improving and those whose quality is declining, indicating the need for further testing time.
  • Release quality analytics: to evaluate the readiness of a release, SeaLights conducts real-time analytics on millions of test runs, code modifications, builds, and production events, answering which build offers users the best quality.

Wrapping It Up

Agile metric implementation improves development’s clarity. Because of how popular agile is, businesses adopt it without hesitation. However, the truth is that adopting agile practices alone is insufficient. Bottom line: If agile hasn’t been effectively deployed, stop expecting miracles from it. Agile must be fully implemented in order to have agile metrics. Keep in mind the agile and lean metrics mentioned above if success is truly your goal. Include them in your workflow. It’s very conceivable that in the future you’ll witness your business soar to heights you never thought possible!

gRPC – JSON Transcoding with ASP.NET Core and More – Features and Manual


Transcoding JSON to gRPC

gRPC is a powerful Remote Procedure Call (RPC) framework. It uses message contracts, Protobuf, streaming, and HTTP/2 to build high-performance, real-time services.

One of gRPC’s drawbacks is that it is not compatible with all platforms. Because HTTP/2 isn’t completely supported by browsers, REST APIs and JSON are the main methods for supplying data to browser apps. Notwithstanding the advantages that gRPC offers, REST APIs and JSON remain crucial in current apps, and developing both gRPC and JSON Web APIs adds needless complexity.

What is gRPC JSON?

An add-on for ASP.NET Core called gRPC JSON transcoding produces RESTful JSON APIs for gRPC services. Once set up, transcoding enables apps to use well-known HTTP principles to call gRPC services:

  • HTTP verbs
  • URL parameter binding
  • JSON requests/responses

About JSON transcoding implementation

The idea of using gRPC JSON transcoding is not new: grpc-gateway is another tool for converting gRPC services into RESTful JSON APIs. Let’s take a closer look at how it works. It employs the same .proto annotations for gRPC services that map HTTP concepts. The key distinction is in how each is put into practice.

Code generation is used by grpc-gateway to build a reverse-proxy server. RESTful requests are converted by the reverse-proxy into gRPC+Protobuf and sent via HTTP/2 to the gRPC service.

In my opinion, transcoding is the improved approach, but the advantage of the grpc-gateway strategy is that the gRPC service is unaware of the RESTful JSON APIs, so grpc-gateway can be used with any gRPC server.

With gRPC JSON transcoding, an ASP.NET Core app does the work in-process: after converting JSON into Protobuf messages, it calls the gRPC service directly. JSON transcoding benefits .NET app developers in a number of ways:

  • Fewer moving parts: one ASP.NET Core application manages both the mapped RESTful JSON API and the gRPC services.
  • Better performance: JSON transcoding deserializes JSON to Protobuf messages and invokes the gRPC service directly. Doing this in-process offers substantial performance advantages compared to initiating a fresh gRPC call to a separate server.
  • A smaller monthly hosting bill due to fewer servers.

MVC and Minimal APIs are not replaced by JSON transcoding. It is quite opinionated about how Protobuf maps to JSON and only supports JSON.
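Wiring transcoding up takes only a little configuration. A minimal sketch, assuming the Microsoft.AspNetCore.Grpc.JsonTranscoding package and a GreeterService class implementing the Greeter contract shown below:

var builder = WebApplication.CreateBuilder(args);

// Registers gRPC and enables JSON transcoding for annotated methods.
builder.Services.AddGrpc().AddJsonTranscoding();

var app = builder.Build();
app.MapGrpcService<GreeterService>();
app.Run();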

Mark up gRPC methods:

Before they support transcoding, gRPC methods need to be annotated with an HTTP rule. The HTTP rule specifies the HTTP method and route as well as details on how to call the gRPC method.

Let’s take a look at the example below:

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/v1/greeter/{name}"
    };
  }
}

The example above:

  • Establishes a SayHello method on a Greeter service. The google.api.http option name is used to specify an HTTP rule for the method.
  • The method is reachable via GET requests to the /v1/greeter/{name} route.
  • The request message’s name field is bound to a route parameter. (A matching service implementation is sketched below.)
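A minimal C# implementation of that contract (my own sketch, based on the standard ASP.NET Core gRPC template and the Greeter.GreeterBase class it generates):

using Grpc.Core;

public class GreeterService : Greeter.GreeterBase
{
    // Serves both native gRPC calls and transcoded GET /v1/greeter/{name} requests.
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = $"Hello {request.Name}" });
    }
}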

Streaming techniques:

Conventional gRPC over HTTP/2 supports streaming in all directions. With transcoding, only server streaming is permitted; client streaming and bidirectional streaming methods are not supported.

Server streaming methods employ line-delimited JSON: each message published using WriteAsync is serialized to JSON and followed by a new line.

Three messages are written using the server streaming technique below:

public override async Task StreamingFromServer(ExampleRequest request,
    IServerStreamWriter<ExampleResponse> responseStream, ServerCallContext context)
{
    for (var i = 1; i <= 3; i++)
    {
        await responseStream.WriteAsync(new ExampleResponse { Text = $"Message {i}" });
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}

Three line-delimited JSON objects are sent to the client:

{"Text":"Message 1"}
{"Text":"Message 2"}
{"Text":"Message 3"}

Please take note that the WriteIndented JSON setting does not apply to server streaming methods: pretty printing introduces new lines and whitespace, which cannot be used with line-delimited JSON.

HTTP protocol

The .NET SDK’s ASP.NET Core gRPC service template builds an app that is only set up for HTTP/2. When an app only supports conventional gRPC over HTTP/2, HTTP/2 is a good default. However, transcoding is compatible with both HTTP/1.1 and HTTP/2. Some platforms, including Unity and UWP, are unable to use HTTP/2. Configure the server to enable both HTTP/1.1 and HTTP/2 in order to support all client apps.

Change the appsettings.json default protocol:

{
  "Kestrel": {
    "EndpointDefaults": {
      "Protocols": "Http1AndHttp2"
    }
  }
}

The protocol negotiation required for enabling HTTP/1.1 and HTTP/2 on the same port uses TLS. See ASP.NET Core gRPC protocol negotiation for further details on setting up HTTP protocols in gRPC apps.

gRPC JSON transcoding vs gRPC-Web

The ability to call gRPC services from a browser is provided by both transcoding and gRPC-Web. Let’s take a look at the main differences between the two, since each goes about it in a different way:

  • With the help of the gRPC-Web client and Protobuf, gRPC-Web enables browser apps to call gRPC services from the browser. gRPC-Web has the advantage of delivering brief, fast Protobuf messages, but it necessitates that the browser app construct a Protobuf client.
  • Transcoding enables browser-based applications to use gRPC services as if they were JSON-based RESTful APIs. The browser app does not need to create a gRPC client or be familiar with gRPC in any way.

Any HTTP client, including the JavaScript APIs available in browsers, can invoke the prior Greeter service with a plain HTTP request.

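A minimal sketch of such a call from a .NET client (my own illustration; it assumes the HTTP rule shown earlier and an app listening on localhost:5000):

using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

// A plain HTTP GET against the transcoded gRPC method: /v1/greeter/{name}
var json = await client.GetStringAsync("/v1/greeter/world");
Console.WriteLine(json); // e.g. {"message":"Hello world"}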

ML.NET – Specialized Machine Learning for the .NET Platform


ML.NET: Machine Learning for .NET

ML.NET is more than just a machine learning library that offers a specific set of features. It is developing into a high-level API and comprehensive framework that not only provides its own ML features but also makes lower-level ML infrastructure libraries and runtimes, like TensorFlow and ONNX, simpler to use.

ML.NET, a free, cross-platform, and open-source machine learning framework created to bring the power of ML to .NET applications for a variety of scenarios, including sentiment analysis, price prediction, recommendation, image classification, and more, was publicly unveiled by Microsoft at the Build conference in May 2018. Microsoft released ML.NET 1.0 at Build 2019 a year later, including additional tools and capabilities to make training unique ML models for .NET developers even simpler.

Basics of Machine Learning

Without explicit programming, computers can now make predictions thanks to machine learning. Machine learning handles problems that are challenging (or impossible) to solve using rules-based programming (e.g., if statements and for loops). For example, if you were required to develop an application that determines whether or not a picture contains a dog, you might not know where to begin. Similarly, if you were asked to design a function that predicts the price of a shirt based on the description of the garment, you might start by looking at keywords like “long sleeves” and “business casual”, but you might not know how to build a function that scales to a few hundred goods.

In order to analyze a large amount of data more quickly and accurately than ever before and to make data-driven decisions more easily, machine learning can automate a wide range of “human” tasks, such as classifying images of dogs and other objects or predicting values, such as the price of a car or house.

Let’s make an overview of ML.NET

In order to build your own unique ML models, you can use the ML.NET framework. This customized approach is in contrast to “pre-built AI”, which involves using pre-made, general AI services from the cloud (like many of the offerings from Azure Cognitive Services). Pre-built AI can be quite effective in many situations, but due to the nature of the machine learning challenge or the deployment context (cloud vs. on-premises), it might not always meet your unique business demands. You can build and train your own ML models using ML.NET, which makes it incredibly flexible and responsive to your particular data and business domain circumstances. And you can run ML.NET anywhere, because the framework includes libraries and NuGet packages that you can use in your .NET apps.

Who does ML.NET address?

With the aid of ML.NET, developers can quickly incorporate machine learning into practically any .NET application using their existing .NET expertise. This implies that if C# (or F#, or VB) is your preferred programming language, you are no longer required to learn a different one, such as Python or R, in order to create your own ML models and incorporate custom machine learning into your .NET projects. It is also not necessary to have any prior machine learning experience: the framework’s tooling and features let you quickly design, train, and deploy quality bespoke machine learning models directly on your computer.

Is ML.NET free?

You can use ML.NET wherever you wish and in any .NET application, because it is a free and open-source framework (similar in autonomy and context to other .NET frameworks like Entity Framework, ASP.NET, or even .NET Core), as shown in Figure 1. This includes desktop applications (WPF, WinForms), web apps and services (ASP.NET MVC, Razor Pages, Blazor, Web API), and more. You can create and use ML.NET models both locally and on any cloud, including Microsoft Azure. Additionally, as ML.NET is cross-platform, you may use it with any OS: Windows, Linux, or macOS. To integrate AI/ML into your current .NET apps, you may even run ML.NET on Windows’s default .NET Framework. By the same rule, you can train and use ML.NET models in offline contexts like desktop applications (WPF and WinForms) or any other offline .NET program (excluding ARM processors, which are currently not supported). ML.NET also provides NimbusML Python bindings. If your company employs teams of data scientists with a stronger command of Python, they can use NimbusML to construct ML.NET models in Python, which you can then use to create commercial end-user .NET applications relatively quickly while running the models as native .NET.

When utilizing NimbusML to build/train ML.NET models that can run directly in .NET apps, data scientists and Python developers familiar with scikit-learn estimators and transforms as well as other well-known libraries in Python, such as NumPy and Pandas, will feel at ease. Visit https://aka.ms/code-nimbusml to learn more about NimbusML.


ML.NET Components

You can utilize the .NET API that ML.NET offers for two different types of actions:

  • Training an ML model: building the model, typically in your back-end processes.
  • Consuming an ML model: using the model to generate predictions in real-world production end-user apps.

Data components:

  • IDataView: An IDataView is a versatile, effective means of representing tabular data (e.g., rows and columns) in .NET. To load datasets for later processing, the IDataView serves as a placeholder. It is the component that holds the data during data transformations and model training, since it is made to handle high-dimensional data and huge data sets quickly. In addition to letting you load data from a file or an enumerable into an IDataView, it can handle enormous data sets up to terabytes in size because you can also stream data from the original data source during training. IDataView objects may include text, Booleans, vectors, integers, and other data.
  • Data Loaders: Almost any data source can be used to load datasets into an IDataView. You can use File Loaders for common ML sources including text, binary, and image files, or the Database Loader to load and train data directly from any supported relational database, including SQL Server, Oracle, MySQL, PostgreSQL, and others.
  • Data Transforms: Since mathematics is the foundation of machine learning, all data must be transformed into numbers or numeric vectors. To transform your data into a format that the ML algorithms can use, ML.NET offers a range of data transforms, including text featurizers and one-hot encoders.

Model Training components:

  • Classical ML: ML.NET supports a variety of traditional ML scenarios and tasks, including time series, regression, classification, and more. You can choose and fine-tune the particular algorithm that achieves higher accuracy and more effectively solves your ML challenge from among the more than 40 trainers (algorithms targeting a certain task) offered by ML.NET.
  • Computer vision: Beginning with ML.NET 1.4 Preview, ML.NET also provides image-based training tasks (image classification/recognition) with your own custom images, using TensorFlow as the training engine. Microsoft is working to make object detection training supported as well.

ModelConsumption and Evaluation components:

  • Model consumption: ML.NET offers several ways to make predictions once you have trained your custom ML model: using the model itself to make batch predictions, the Prediction Engine to make single predictions, or the Prediction Engine Pool to make predictions in scalable, multi-threaded applications (see the sketch after this list).
  • Model evaluation: Before being used in production, a model should be evaluated to make sure it produces predictions of the desired quality. Depending on the specific ML task, ML.NET offers several evaluators relevant to each task so that you can measure the model’s accuracy as well as many other common machine learning metrics.
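
As a minimal sketch of the Prediction Engine Pool mentioned above, assuming the Microsoft.Extensions.ML NuGet package in an ASP.NET Core app; the model name, file path, and the ModelInput/ModelOutput types are placeholders:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.ML;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register a pool of prediction engines around a serialized model file.
        // "TaxiFareModel" and "model.zip" are hypothetical placeholders.
        services.AddPredictionEnginePool<ModelInput, ModelOutput>()
                .FromFile(modelName: "TaxiFareModel", filePath: "model.zip");
    }
}

// Wherever a PredictionEnginePool<ModelInput, ModelOutput> is injected,
// predictions are thread-safe:
// var prediction = pool.Predict(modelName: "TaxiFareModel", example: input);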

Extensions and Tools:

Integration of an ONNX model: ONNX is a standardized and versatile ML model format. Any pre-trained ONNX model can be run and scored using ML.NET.
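
As a rough illustration, here is a minimal sketch of scoring a pre-trained ONNX model inside an ML.NET pipeline, assuming the Microsoft.ML.OnnxTransformer and Microsoft.ML.OnnxRuntime NuGet packages; the model file, column/node names, and vector size are hypothetical and must match the actual ONNX graph:

using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

public class OnnxInput
{
    [VectorType(4)] // hypothetical input size; must match the ONNX model
    public float[] Features { get; set; }
}

public static class OnnxScoringDemo
{
    public static void Main()
    {
        MLContext mlContext = new MLContext();

        // An empty IDataView is enough to bind the pipeline schema.
        IDataView emptyData = mlContext.Data.LoadFromEnumerable(new List<OnnxInput>());

        // Apply the pre-trained ONNX model; node names are hypothetical.
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            outputColumnNames: new[] { "Score" },
            inputColumnNames: new[] { "Features" },
            modelFile: "model.onnx");

        ITransformer onnxModel = pipeline.Fit(emptyData);
        IDataView scored = onnxModel.Transform(emptyData);
    }
}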

Integration of the TensorFlow model: TensorFlow is one of the most popular deep learning libraries. In addition to the image classification training scenario described earlier, this API lets you execute and score any pre-trained TensorFlow model.

Tools: To make model training even simpler, you can use ML.NET’s tools (Model Builder in Visual Studio or the cross-platform CLI). To find the best model for your data and scenario, these tools internally experiment with numerous combinations of algorithms and configurations using the ML.NET AutoML API.

Hello ML.NET

Let’s look at some ML.NET code now that you’ve seen an overview of the framework’s many parts and ML.NET itself.

It only takes a few minutes to create your own custom machine learning model with ML.NET. The code in Example 1 shows a simple ML.NET application that trains, evaluates, and consumes a regression model to predict the fare of a given taxi ride.

// 1. Initialize ML.NET environment
MLContext mlContext = new MLContext();

// 2. Load training data
IDataView trainData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-train.csv", separatorChar: ',');

// 3. Add data transformations
var dataProcessPipeline = mlContext.Transforms.Categorical.OneHotEncoding(
        outputColumnName: "PaymentTypeEncoded", inputColumnName: "PaymentType")
    .Append(mlContext.Transforms.Concatenate(outputColumnName: "Features",
        "PaymentTypeEncoded", "PassengerCount", "TripTime", "TripDistance"));

// 4. Add algorithm
var trainer = mlContext.Regression.Trainers.Sdca(labelColumnName: "FareAmount", featureColumnName: "Features");

var trainingPipeline = dataProcessPipeline.Append(trainer);

// 5. Train model
var model = trainingPipeline.Fit(trainData);

// 6. Evaluate model on test data (same comma separator as the training file)
IDataView testData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-test.csv", separatorChar: ',');
IDataView predictions = model.Transform(testData);
var metrics = mlContext.Regression.Evaluate(predictions, labelColumnName: "FareAmount");

// 7. Predict on sample data and print results
var input = new ModelInput
{
    PassengerCount = 1,
    TripTime = 1150,
    TripDistance = 4,
    PaymentType = "CRD"
};
var result = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model).Predict(input);
Console.WriteLine($"Predicted fare: {result.FareAmount}\nModel Quality (RSquared): {metrics.RSquared}");

Tools for Automated Machine Learning in ML.NET

Choosing the right data transformations and algorithms for your data and ML scenario can be difficult, especially if you don’t have a background in data science. Although writing the code to train ML.NET models is simple, picking those pieces can be a challenge without that expertise. However, with the preview release of Automated Machine Learning and tooling for ML.NET, Microsoft has automated the model selection process for you, so you can get started with machine learning in .NET immediately, without needing prior machine learning skills.

The Automated Machine Learning tool in ML.NET, also known as AutoML, runs locally on your development computer and automatically builds and trains models, searching for the best combination of algorithm and settings. Simply state the machine learning goal and provide the dataset, and AutoML will select and produce the highest-quality model it can find by experimenting with a variety of algorithms and associated settings.

AutoML now supports regression (for example, price prediction), binary classification (for example, sentiment analysis), and multi-class classification (for example, issue classification). Support for more scenarios is also being developed.

Although the ML.NET AutoML API lets you use AutoML directly (a sketch follows), ML.NET also provides tooling on top of AutoML to further simplify machine learning in .NET. The following sections use these tools to show how simple it is to build your own ML.NET model.
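
Here is a minimal sketch of that API, assuming the Microsoft.ML.AutoML NuGet package; the SentimentInput class and its column indices are assumptions based on the sentiment dataset used later in this article:

using System;
using Microsoft.ML;
using Microsoft.ML.AutoML;
using Microsoft.ML.Data;

public class SentimentInput
{
    [LoadColumn(0)] public bool Sentiment { get; set; }       // assumed column order
    [LoadColumn(1)] public string SentimentText { get; set; }
}

public static class AutoMLDemo
{
    public static void Main()
    {
        MLContext mlContext = new MLContext();

        IDataView trainData = mlContext.Data.LoadFromTextFile<SentimentInput>(
            "wikipedia-detox-250-line-data.tsv", hasHeader: true, separatorChar: '\t');

        // Let AutoML explore trainers and settings for up to 20 seconds.
        var settings = new BinaryExperimentSettings { MaxExperimentTimeInSeconds = 20 };
        var experiment = mlContext.Auto().CreateBinaryClassificationExperiment(settings);
        var result = experiment.Execute(trainData, labelColumnName: "Sentiment");

        Console.WriteLine($"Best trainer: {result.BestRun.TrainerName}");
        Console.WriteLine($"Accuracy: {result.BestRun.ValidationMetrics.Accuracy:P2}");
    }
}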

ML.NET Model Builder

You can use the ML.NET Model Builder, a simple UI tool in Visual Studio, to create ML.NET models along with the model training and consumption code. Internally, the tool uses AutoML to select the data transformations, algorithms, and algorithm settings for your data that produce the most accurate model.

To get a trained machine learning model, you give Model Builder three things:

  • The scenario for machine learning
  • The data set
  • How long you want to train

Let’s try out an example of sentiment analysis. You’ll develop a console application that can determine whether a comment is harmful or not.

The ML.NET Model Builder Visual Studio extension must first be downloaded and installed, either from the Visual Studio Marketplace or through Visual Studio’s Extensions Manager. Model Builder works with both Visual Studio 2017 and 2019.

The Wikipedia detox dataset (wikipedia-detox-250-line-data.tsv), which is available as a TSV file, must also be downloaded. Each row represents a distinct user review submitted to Wikipedia (SentimentText), labeled as toxic (1/True) or non-toxic (0/False).

TSV file

Data preview of Wikipedia detox dataset
Create a new .NET Core Console Application in Visual Studio after installing the extension and downloading the dataset.

Right-click on your project in the Solution Explorer and choose Add > Machine Learning.

NET Core ML instruction

A new tool window with ML.NET Model Builder appears. From here, as shown in the image attached below, you can select from a range of scenarios. Microsoft is currently working to bring new ML scenarios, such as image classification and recommendation, to Model Builder.

Select the sentiment analysis scenario to learn how to build a model that predicts which of two categories an item belongs to (in this case, toxic or non-toxic). This ML task is known as binary classification.

ML.NET Model Builder

You can import data into Model Builder from a file (.txt, .csv, or .tsv) or from a database (e.g., SQL Server). Choose File as your input data source on the Data screen, then upload the wikipedia-detox-250-line-data.tsv dataset. Select Sentiment as your Label from the Column to Predict (Label) drop-down, and keep SentimentText checked as the Feature.

ML.NET

Move on to the Train step, where you specify the Time to train (i.e., how long the tool will spend evaluating different models using AutoML). In general, longer training periods allow AutoML to explore more models with multiple algorithms and settings, so larger datasets will need more time to train.

Because the wikipedia-detox-250-line-data.tsv dataset is less than 1MB, change the Time to train to just 20 seconds and select Start training. This starts the AutoML process of iterating through different algorithms and settings to find the best model.

You can watch the training progress in the Model Builder UI, including the time remaining, the best model and model accuracy found so far, and the last model explored.

Once you get to the Evaluate step, you’ll see output and evaluation metrics. After 20 seconds spent exploring five different models, AutoML chose the AveragedPerceptronBinary trainer, a linear classification algorithm that excels in text classification scenarios, as the best model, with an accuracy of 70.83%.

AutoML process

ML.NET CLI

Additionally, ML.NET offers cross-platform tooling so that you can still use AutoML to quickly develop machine learning models even if you don’t work on Windows or with Visual Studio. You can install and use the ML.NET CLI (command-line interface), a dotnet Global Tool, from any command prompt (Windows, macOS, or Linux) to create high-quality ML.NET models from the training datasets you supply; an example invocation follows. Similar to Model Builder, the ML.NET CLI generates sample C# code to run the resulting model, along with the C# code that was used to build and train it, so you can examine the settings and algorithm that AutoML selected.
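
For the sentiment scenario above, a plausible invocation might look like the following; note that the exact command and flag names have changed across CLI versions, so treat this preview-era auto-train syntax as an assumption:

# Install the ML.NET CLI as a dotnet Global Tool, then run AutoML
# against the sentiment dataset for 20 seconds of exploration.
dotnet tool install -g mlnet
mlnet auto-train --task binary-classification --dataset "wikipedia-detox-250-line-data.tsv" --label-column-name "Sentiment" --max-exploration-time 20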

Roadmap for ML.NET

Even though ML.NET already covers a wide range of machine learning tasks and algorithms, there are still many areas Microsoft plans to expand and enhance, such as:

  • Deep Learning-based computer vision training scenarios (TensorFlow)
  • Completing the Image Classification training API, adding support for the GPU, and releasing this functionality to GA
  • Developing and making available the Object Detection training API
  • CLI and Model Builder updates: new scenarios for object detection, recommendation, image classification/recognition, and other ML tasks
  • Integration and support of ML.NET in Azure ML and Azure AutoML, enabling a complete model lifecycle with model versioning and registry, surfaced model analysis, and explainability
  • Text analysis using TensorFlow and DNN
  • Additional data loaders, such as NoSQL databases
  • ARM support or full ONNX export support for inference/scoring in Xamarin mobile apps (iOS, Android) and IoT workloads
  • Support for UWP apps targeting x64/x86, and Unity