gRPC – JSON Transcoding with ASP.NET Core and More – Features and Manual

Transcoding JSON to gRPC

gRPC is a powerful Remote Procedure Call (RPC) framework. It combines HTTP/2, Protobuf message contracts, and streaming to build high-performance, real-time services.

One of gRPC's drawbacks is that it doesn't work on every platform. Because browsers don't fully support HTTP/2, REST APIs and JSON remain the main way to supply data to browser apps. Despite the advantages gRPC offers, REST APIs and JSON have a crucial place in modern apps, and maintaining both gRPC and JSON Web APIs adds needless complexity.

What is gRPC JSON transcoding?

gRPC JSON transcoding is an extension for ASP.NET Core that creates RESTful JSON APIs for gRPC services. Once configured, transcoding allows apps to call gRPC services using familiar HTTP concepts:

  • HTTP verbs
  • URL parameter binding
  • JSON requests/responses

About JSON transcoding implementation

The idea of combining gRPC and JSON transcoding isn't new. grpc-gateway is another tool that converts gRPC services into RESTful JSON APIs. Let's take a closer look at how it works.
It uses the same .proto annotations to map HTTP concepts onto gRPC services; the key distinction is in how each is implemented.

grpc-gateway uses code generation to build a reverse-proxy server. The reverse proxy converts RESTful requests into gRPC+Protobuf and sends them over HTTP/2 to the gRPC service.

The advantage of this strategy is that the gRPC service is unaware of the RESTful JSON APIs: grpc-gateway can be used with any gRPC server.

gRPC JSON transcoding, by contrast, runs inside the ASP.NET Core app itself. It converts JSON into Protobuf messages and then calls the gRPC service directly. JSON transcoding benefits .NET app developers in a number of ways:

  • Fewer moving parts: a single ASP.NET Core application hosts both the gRPC services and the mapped RESTful JSON API.
  • JSON transcoding deserializes JSON into Protobuf messages and calls the gRPC service directly. Doing this in-process offers substantial performance advantages compared with making a fresh gRPC call to a separate server.
  • A smaller monthly hosting bill, thanks to fewer servers.

JSON transcoding doesn't replace MVC or Minimal APIs. It supports only JSON and is quite opinionated about how Protobuf maps to JSON.

Mark up gRPC methods:

Before they support transcoding, gRPC methods must be annotated with an HTTP rule. The HTTP rule specifies the HTTP method and route, along with details on how to call the gRPC method.

Let's take a look at the example below:

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/v1/greeter/{name}"
    };
  }
}

The preceding example:

  • Defines a SayHello method on a Greeter service. The google.api.http option specifies an HTTP rule for the method.
  • Makes the method reachable via GET requests to the /v1/greeter/{name} route.
  • Binds the request message's name field to the name route parameter.
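With this rule in place, the method can be invoked like any REST endpoint. Below is a minimal sketch using HttpClient; the address https://localhost:5001 and the "world" value are illustrative assumptions:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TranscodingClientSample
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // GET /v1/greeter/{name} routes to Greeter.SayHello;
        // the "world" segment binds to the HelloRequest name field.
        var json = await client.GetStringAsync("https://localhost:5001/v1/greeter/world");

        // The HelloReply message comes back as JSON, e.g. {"message":"Hello world"}
        Console.WriteLine(json);
    }
}
```

No gRPC client or Protobuf tooling is needed on the caller's side; plain HTTP is enough.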

Streaming methods:

Conventional gRPC over HTTP/2 supports streaming in all directions. With transcoding, only server streaming is supported; client streaming and bidirectional streaming methods are not.

Server streaming methods use line-delimited JSON: each message written with WriteAsync is serialized to JSON and followed by a new line.

The server streaming method below writes three messages:

public override async Task StreamingFromServer(ExampleRequest request,
    IServerStreamWriter<ExampleResponse> responseStream, ServerCallContext context)
{
    for (var i = 1; i <= 3; i++)
    {
        await responseStream.WriteAsync(new ExampleResponse { Text = $"Message {i}" });
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}

Three line-delimited JSON objects are sent to the client:

{"Text":"Message 1"}
{"Text":"Message 2"}
{"Text":"Message 3"}

Note that the WriteIndented JSON setting doesn't apply to server streaming methods. Pretty printing adds new lines and whitespace, which can't be used with line-delimited JSON.

HTTP protocol

The ASP.NET Core gRPC service template in the .NET SDK builds an app configured only for HTTP/2. HTTP/2 is a good default when an app supports only conventional gRPC over HTTP/2. Transcoding, however, works with both HTTP/1.1 and HTTP/2. Some platforms, including Unity and UWP, can't use HTTP/2. To support all client apps, configure the server to enable both HTTP/1.1 and HTTP/2.

Change the appsettings.json default protocol:

{
  "Kestrel": {
    "EndpointDefaults": {
      "Protocols": "Http1AndHttp2"
    }
  }
}

The protocol negotiation required for enabling HTTP/1.1 and HTTP/2 on the same port uses TLS. See ASP.NET Core gRPC protocol negotiation for further details on setting up HTTP protocols in gRPC apps.
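If you prefer configuring protocols in code rather than in appsettings.json, the same result can be achieved in Kestrel's setup. A sketch assuming the minimal hosting model in Program.cs:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

// Allow HTTP/1.1 and HTTP/2 on every endpoint so both RESTful JSON
// clients and conventional gRPC clients can connect.
builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureEndpointDefaults(listenOptions =>
    {
        listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
    });
});

var app = builder.Build();
app.Run();
```

Either approach sets the same Kestrel endpoint default; pick one so the setting isn't duplicated.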

gRPC JSON transcoding vs gRPC-Web

Both transcoding and gRPC-Web make it possible to call gRPC services from a browser, but each goes about it in a different way:

  • gRPC-Web lets browser apps call gRPC services from the browser using the gRPC-Web client and Protobuf. gRPC-Web delivers small, fast Protobuf messages, but it requires the browser app to generate a gRPC client.
  • Transcoding lets browser-based applications use gRPC services as if they were JSON-based RESTful APIs. The browser app doesn't need to generate a gRPC client or know anything about gRPC.

The earlier Greeter service can be invoked using JavaScript APIs in the browser, for example with fetch:

fetch('/v1/greeter/world')
  .then((response) => response.json())
  .then((result) => {
    console.log(result.message);
  });

ML.NET – Specialized Machine Learning for the .NET Platform

ML.NET: Machine Learning for .NET

ML.NET is more than just a machine learning library offering a fixed set of features. It is developing into a high-level API and comprehensive framework that not only provides its own ML features but also simplifies the use of lower-level ML infrastructure libraries and runtimes, such as TensorFlow and ONNX.

Microsoft publicly unveiled ML.NET at the Build conference in May 2018: a free, cross-platform, open-source machine learning framework created to bring the power of ML to .NET applications for a variety of scenarios, including sentiment analysis, price prediction, recommendation, image classification, and more. A year later, at Build 2019, Microsoft released ML.NET 1.0, with additional tools and capabilities that make training custom ML models even simpler for .NET developers.

Basics of Machine Learning

Machine learning lets computers make predictions without explicit programming. It handles problems that are challenging (or impossible) to solve with rules-based programming (e.g., if statements and for loops). For example, if you had to develop an application that determines whether a picture contains a dog, you might not know where to begin. Similarly, if you were asked to design a function that predicts the price of a shirt from its description, you might start by looking for keywords like "long sleeves" and "business casual," but you might not know how to scale that function to a few hundred products.

In order to analyze a large amount of data more quickly and accurately than ever before and to make data-driven decisions more easily, machine learning can automate a wide range of “human” tasks, such as classifying images of dogs and other objects or predicting values, such as the price of a car or house.

An overview of ML.NET

The ML.NET framework lets you build your own custom ML models. This customized approach contrasts with "pre-built AI," which means consuming ready-made, general-purpose AI services from the cloud (like many of the offerings from Azure Cognitive Services). Pre-built AI can be quite effective in many situations, but it might not always meet your specific business needs, whether because of the nature of the machine learning challenge or because of the deployment context (cloud vs. on-premises). With ML.NET you can build and train your own ML models, which makes it extremely flexible and adaptable to your particular data and business domain. And because the framework ships as libraries and NuGet packages you can use in your .NET apps, you can run ML.NET anywhere.

Who is ML.NET for?

With ML.NET, developers can quickly incorporate machine learning into practically any .NET application using their existing .NET expertise. If C# (or F#, or VB) is your preferred programming language, you no longer need to learn a different language, such as Python or R, to create your own ML models and add custom machine learning to your .NET projects. Nor do you need prior machine learning experience: the framework's tooling and features let you quickly design, train, and deploy high-quality custom machine learning models directly on your computer.

Is ML.NET free?

ML.NET is a free and open-source framework (similar in autonomy and context to other .NET frameworks like Entity Framework, ASP.NET, or even .NET Core), so you can use it wherever you wish and in any .NET application, as shown in Figure 1. This includes desktop applications (WPF, WinForms), web apps and services (ASP.NET MVC, Razor Pages, Blazor, Web API), and more. You can create and use ML.NET models both locally and on any cloud, including Microsoft Azure. And because ML.NET is cross-platform, you can use it on any OS, including Windows, Linux, and macOS. You can even run ML.NET on the traditional .NET Framework on Windows to integrate AI/ML into your existing .NET apps. By the same rule, you can train and use ML.NET models in offline contexts, such as desktop applications (WPF and WinForms) or any other offline .NET program (excluding ARM processors, which are currently not supported). ML.NET also provides NimbusML Python bindings. If your company employs teams of data scientists more fluent in Python, they can use NimbusML to build ML.NET models in Python, which you can then use to create commercial end-user .NET applications relatively quickly while running the models as native .NET.

Data scientists and Python developers familiar with scikit-learn estimators and transforms, as well as other well-known Python libraries such as NumPy and Pandas, will feel at ease using NimbusML to build and train ML.NET models that run directly in .NET apps. Visit https://aka.ms/code-nimbusml to learn more about NimbusML.


ML.NET Components

You can utilize the .NET API that ML.NET offers for two different types of actions:

  • Training an ML model involves building the model, typically in your “back storage.”
  • Consumption of ML models: utilizing the model to generate predictions in real-world production end-user apps

Data components:

  • IDataView: An IDataView is a versatile, efficient way to represent tabular data (rows and columns) in .NET. It serves as the placeholder for datasets loaded for later processing, and it is the component that holds the data during data transformations and model training, since it is designed to handle high-dimensional data and huge datasets quickly. Besides loading data from a file or an enumerable into an IDataView, you can also stream data from the original data source during training, so it can handle datasets up to terabytes in size. IDataView objects may contain text, Booleans, vectors, integers, and other data types.
  • Data Loaders: Datasets can be loaded into an IDataView from almost any data source. You can use File Loaders for common ML sources such as text, binary, and image files, or the Database Loader to load and train directly from any relational database supported by the system, including SQL Server, Oracle, PostgreSQL, MySQL, and others.
  • Data Transforms: Since mathematics is the foundation of machine learning, all data must be transformed into numbers or numeric vectors. To transform your data into a format that the ML algorithms can use, ML.NET offers a range of data transforms, including text featurizers and one-hot encoders.
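To make the components above concrete, here is a small sketch that wraps an in-memory collection in an IDataView and applies a data transform. The ShirtData type and its values are hypothetical:

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

public class ShirtData
{
    public string Description { get; set; }
    public float Price { get; set; }
}

class DataComponentsSample
{
    static void Main()
    {
        var mlContext = new MLContext();

        var samples = new[]
        {
            new ShirtData { Description = "long sleeves business casual", Price = 35f },
            new ShirtData { Description = "short sleeves cotton", Price = 15f }
        };

        // Wrap the in-memory collection in an IDataView.
        IDataView data = mlContext.Data.LoadFromEnumerable(samples);

        // Apply a data transform: turn the text column into a numeric feature vector.
        var pipeline = mlContext.Transforms.Text.FeaturizeText("Features", "Description");
        IDataView transformed = pipeline.Fit(data).Transform(data);
    }
}
```

The same IDataView could instead come from a file loader or the Database Loader; the downstream transforms don't change.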

ModelTraining components:

  • Classical ML: ML.NET supports a variety of traditional ML scenarios and tasks, including regression, classification, time series, and more. From the more than 40 trainers (algorithms targeting a specific task) that ML.NET offers, you can choose and fine-tune the particular algorithm that achieves higher accuracy and best solves your ML challenge.
  • Computer vision: Beginning with ML.NET 1.4-Preview, ML.NET also provides image-based training tasks (image classification/recognition) with your own custom images, using TensorFlow as the training engine. Microsoft is also working to support object detection training.

ModelConsumption and Evaluation components:

  • Model consumption: After you've trained your custom ML model, ML.NET offers several ways to make predictions: using the model itself for bulk predictions, the Prediction Engine for individual predictions, or the Prediction Engine Pool for predictions in scalable, multi-threaded applications.
  • Model evaluation: A model should be evaluated to make sure it produces predictions of the desired quality before being used in production. ML.NET offers several evaluators relevant to each ML task, so you can determine your model's accuracy along with many other common machine learning metrics.

Extensions and Tools:

Integration of an ONNX model: ONNX is a standardized and versatile ML model format. Any pre-trained ONNX model can be run and scored using ML.NET.

Integration of the TensorFlow model: TensorFlow is one of the most popular deep learning libraries. In addition to the previously mentioned image classification training scenario, this API lets you run and score any pre-trained TensorFlow model.

Tools: To make model training even simpler, you can use ML.NET’s tools (Model Builder in Visual Studio or the cross-platform CLI). To find the best model for your data and scenario, these tools internally experiment with numerous combinations of algorithms and configurations using the ML.NET AutoML API.

Hello ML.NET

Let’s look at some ML.NET code now that you’ve seen an overview of the framework’s many parts and ML.NET itself.

It takes only a few minutes to create your own custom machine learning model with ML.NET. The code in Example 1 shows a straightforward ML.NET application that trains, evaluates, and consumes a regression model to predict the cost of a given taxi ride.

// 1. Initialize ML.NET environment
MLContext mlContext = new MLContext();

// 2. Load training data
IDataView trainData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-train.csv", separatorChar:',');

// 3. Add data transformations
var dataProcessPipeline = mlContext.Transforms.Categorical.OneHotEncoding(
    outputColumnName:"PaymentTypeEncoded", "PaymentType")
    .Append(mlContext.Transforms.Concatenate(outputColumnName:"Features",
    "PaymentTypeEncoded","PassengerCount","TripTime","TripDistance"));

// 4. Add algorithm
var trainer = mlContext.Regression.Trainers.Sdca(labelColumnName: "FareAmount", featureColumnName: "Features");

var trainingPipeline = dataProcessPipeline.Append(trainer);

// 5. Train model
var model = trainingPipeline.Fit(trainData);

// 6. Evaluate model on test data
IDataView testData = mlContext.Data.LoadFromTextFile<ModelInput>("taxi-fare-test.csv", separatorChar:',');
IDataView predictions = model.Transform(testData);
var metrics = mlContext.Regression.Evaluate(predictions,"FareAmount");

// 7. Predict on sample data and print results
var input = new ModelInput
{
    PassengerCount = 1,
    TripTime = 1150,
    TripDistance = 4, 
    PaymentType = "CRD"
};
var result = mlContext.Model.CreatePredictionEngine<ModelInput,ModelOutput>(model).Predict(input);
Console.WriteLine($"Predicted fare: {result.FareAmount}\nModel Quality (RSquared): {metrics.RSquared}");
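The example assumes ModelInput and ModelOutput classes that map the dataset columns. They aren't shown in the listing, so here is a sketch of what they might look like; the column order is an assumption based on the fields used above:

```csharp
using Microsoft.ML.Data;

public class ModelInput
{
    [LoadColumn(0)] public string PaymentType { get; set; }
    [LoadColumn(1)] public float PassengerCount { get; set; }
    [LoadColumn(2)] public float TripTime { get; set; }
    [LoadColumn(3)] public float TripDistance { get; set; }
    [LoadColumn(4)] public float FareAmount { get; set; }
}

public class ModelOutput
{
    // Regression trainers write their prediction to the "Score" column,
    // so it is mapped back onto a FareAmount property here.
    [ColumnName("Score")]
    public float FareAmount { get; set; }
}
```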

Tools for Automated Machine Learning in ML.NET

Choosing the right data transformations and algorithms for your data and ML scenario can be difficult, especially without a data science background. Writing the code to train ML.NET models is simple, but selecting the right model is a challenge if you don't have that expertise. With the preview release of Automated Machine Learning and tooling for ML.NET, however, Microsoft has automated the model selection process for you, so you can get started with machine learning in .NET without needing prior machine learning skills.

The Automated Machine Learning tool in ML.NET, also known as AutoML, operates locally on your development computer and uses a combination of the best algorithm and settings to automatically construct and train models. Simply state the machine learning goal and provide the dataset, and AutoML will select and produce the highest quality model by experimenting with a variety of algorithm combinations and associated algorithm settings.

AutoML now supports regression (for example, price prediction), binary classification (for example, sentiment analysis), and multi-class classification (for example, issue classification). Support for more scenarios is also being developed.

Although the ML.NET AutoML API lets you use AutoML directly, ML.NET also provides tooling on top of AutoML to further simplify machine learning in .NET. The following sections use these tools to demonstrate how simple it is to build your own ML.NET model.

ML.NET Model Builder

The ML.NET Model Builder is a straightforward UI tool in Visual Studio for creating ML.NET models along with the model training and consumption code. Internally, the tool uses AutoML to select the data transformations, algorithms, and algorithm settings for your data that produce the most accurate model.

To get a trained machine learning model, you give Model Builder three things:

  • The scenario for machine learning
  • The data set
  • How long you want to train

Let's try out an example of sentiment analysis. You'll develop a console application that can determine whether a comment is toxic or not.

The ML.NET Model Builder Visual Studio extension must first be downloaded and installed, either from the Visual Studio Marketplace or in Visual Studio's Extensions Manager. Model Builder works with both Visual Studio 2017 and 2019.

The Wikipedia detox dataset (wikipedia-detox-250-line-data.tsv), available as a TSV file, must also be downloaded. Each row represents a review submitted by a user to Wikipedia (SentimentText), labeled as toxic (1/True) or non-toxic (0/False).

Data preview of the Wikipedia detox dataset (TSV file)

Create a new .NET Core Console Application in Visual Studio after installing the extension and downloading the dataset.

Right-click on your project in the Solution Explorer and choose Add > Machine Learning.

A new tool window with ML.NET Model Builder appears. From here, as shown in the image below, you can select from a range of scenarios. Microsoft is currently working to bring more ML scenarios, such as image classification and recommendation, to Model Builder.

Select the sentiment analysis scenario to build a model that predicts which of two categories an item belongs to. This ML task is known as binary classification (in this case, toxic or non-toxic).

ML.NET Model Builder

You can import data into Model Builder from a file (.txt, .csv, or .tsv) or from a database (e.g., SQL Server). On the Data screen, choose File as your input data source and upload the wikipedia-detox-250-line-data.tsv dataset. Select Sentiment as your Label from the Column to Predict (Label) drop-down, and keep SentimentText checked as the Feature.

Move on to the Train step, where you specify the Time to train (i.e., how long the tool will spend evaluating different models using AutoML). In general, longer training periods allow AutoML to explore more models with multiple algorithms and settings, so larger datasets will need more time to train.

Because the wikipedia-detox-250-line-data.tsv dataset is less than 1MB, change the Time to train to just 20 seconds and select Start training. This starts the AutoML process of iterating through different algorithms and settings to find the best model.

In the Model Builder UI you can watch the training progress, including the time remaining, the best model and model accuracy found so far, and the last model explored.

Once you reach the Evaluate step, you'll see output and evaluation metrics. After 20 seconds spent exploring five different models, AutoML chose the AveragedPerceptron binary trainer, a linear classification algorithm that excels in text classification scenarios, as the best model, with an accuracy of 70.83%.

ML.NET CLI

Additionally, ML.NET offers cross-platform tooling so that you can still use AutoML to quickly develop machine learning models even if you don’t work on Windows or with Visual Studio. To create high-quality ML.NET models based on training datasets you supply, you can install and use the ML.NET CLI (command-line interface), a dotnet Global Tool, on any command-prompt (Windows, macOS, or Linux). The ML.NET CLI generates sample C# code to run that model along with the C# code that was used to build and train it, similar to Model Builder, so you can examine the settings and algorithm that AutoML selected.

Roadmap for ML.NET

Even though ML.NET covers a very wide range of machine learning tasks and algorithms, there are still many intriguing areas Microsoft wants to expand and enhance, such as:

  • Deep Learning-based computer vision training scenarios (TensorFlow)
  • Completing the Image Classification training API, adding GPU support, and releasing this functionality to GA
  • Developing and releasing the Object Detection training API
  • CLI and Model Builder updates: scenarios for object detection, recommendation, image classification/recognition, and other ML tasks
  • Integration and support of ML.NET in Azure ML and Azure AutoML, enabling a thorough model lifecycle: model versioning and registry, surfaced model analysis, and explainability
  • Text analysis using TensorFlow and DNNs
  • Other data loaders, such as No-SQL databases
  • ARM support, or full ONNX export support, for inference/scoring in Xamarin mobile apps for iOS and Android and in IoT workloads
  • UWP app support targeting x64/x86, and Unity

System.Text.Json in .NET 7 – What's New? Theory and Practice

System.Text.Json in .NET 7

System.Text.Json has been greatly improved in .NET 7 through new performance-oriented features and the resolution of critical reliability and consistency issues. The .NET 7 release introduces contract customization, which gives you more control over how types are serialized and deserialized, polymorphic serialization for user-defined type hierarchies, required member support, and many other features.

Contract Customization

System.Text.Json decides how a given .NET type should be serialized and deserialized by building a JSON contract for it. The contract is derived from the type's shape (the constructors, properties, and fields that are available, and whether it implements IEnumerable or IDictionary), either at runtime using reflection or at compile time using the source generator. Historically, users have had some limited control over the derived contract through System.Text.Json attribute annotations, provided they had the ability to change the type declaration.

JsonTypeInfo<T>, previously used only as an opaque token in source generator APIs, now serves as a representation of the contract metadata for a given type T. With .NET 7, most aspects of the JsonTypeInfo contract metadata have been made visible and editable. Contract customization lets users plug in their own JSON contract resolution logic through implementations of the IJsonTypeInfoResolver interface:

public interface IJsonTypeInfoResolver
{
    JsonTypeInfo? GetTypeInfo(Type type, JsonSerializerOptions options);
}

For the specified Type and JsonSerializerOptions combination, a contract resolver returns a JsonTypeInfo object that has been configured. If the resolver does not support metadata for the designated input type, it may return null.

The DefaultJsonTypeInfoResolver class, which implements IJsonTypeInfoResolver, now exposes contract resolution done by the reflection-based default serializer. By using this class, users can add their own specific modifications to the standard reflection-based resolution or mix it with other resolvers (such as source-generated resolvers).

The JsonSerializerContext class, which the source generator builds on, implements IJsonTypeInfoResolver as of .NET 7. For additional information on the source generator, see How to use source generation in System.Text.Json.

The new TypeInfoResolver field allows a custom resolver to be added to a JsonSerializerOptions instance:

// configure to use reflection contracts
var reflectionOptions = new JsonSerializerOptions 
{ 
  TypeInfoResolver = new DefaultJsonTypeInfoResolver()
};
// configure to use source generated contracts
var sourceGenOptions = new JsonSerializerOptions 
{ 
  TypeInfoResolver = EntryContext.Default 
};
[JsonSerializable(typeof(MyPoco))]
public partial class EntryContext : JsonSerializerContext { }
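DefaultJsonTypeInfoResolver also exposes a Modifiers collection, so custom logic can be layered on top of the default reflection-based contracts. A sketch that hides a hypothetical property named "Secret" from serialization:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;

var options = new JsonSerializerOptions
{
    TypeInfoResolver = new DefaultJsonTypeInfoResolver
    {
        Modifiers =
        {
            static typeInfo =>
            {
                // Walk the contract's properties and drop any named "Secret"
                // so they are never serialized or deserialized.
                for (int i = typeInfo.Properties.Count - 1; i >= 0; i--)
                {
                    if (typeInfo.Properties[i].Name == "Secret")
                    {
                        typeInfo.Properties.RemoveAt(i);
                    }
                }
            }
        }
    }
};
```

Each modifier runs after the default contract is built, which is what makes this approach composable with other resolvers.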

Type Hierarchies in .NET

By default, System.Text.Json does not serialize polymorphic type hierarchies: when a value is declared as a base type, only the base type's properties are serialized. .NET 7 adds opt-in polymorphic serialization through the JsonDerivedTypeAttribute, which is applied to the base type to declare its supported derived types:

[JsonDerivedType(typeof(DerivedClass))]
public class Base
{
    public int i { get; set; }
}

public class DerivedClass : Base
{
    public int j { get; set; }
}

The code example above enables polymorphic serialization for the Base class:

Base value = new DerivedClass();
JsonSerializer.Serialize(value);

Because the run-time type above is DerivedClass, the serialized JSON includes the derived type's properties.
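JsonDerivedTypeAttribute can also carry a type discriminator, which is written to the payload as a $type metadata property so the JSON can round-trip back to the correct derived type. A self-contained sketch with illustrative class names:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

[JsonDerivedType(typeof(DerivedShape), typeDiscriminator: "derived")]
public class BaseShape
{
    public int Sides { get; set; }
}

public class DerivedShape : BaseShape
{
    public int Corners { get; set; }
}

class PolymorphismSample
{
    static void Main()
    {
        BaseShape shape = new DerivedShape { Sides = 4, Corners = 4 };

        // The discriminator ("$type":"derived") appears in the output,
        // letting Deserialize<BaseShape>() rebuild a DerivedShape instance.
        string json = JsonSerializer.Serialize(shape);
        BaseShape roundTripped = JsonSerializer.Deserialize<BaseShape>(json)!;
    }
}
```

Without a discriminator, serialization still includes the derived properties, but deserialization can only produce the base type.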

Required Members in .NET

C# 11 added support for required members. This lets class authors specify which properties and fields must be populated during instantiation, and reflection-based serialization now respects the required modifier during deserialization. Here is a brief C# code example:

using System.Text.Json;

JsonSerializer.Deserialize<Student>("""{"NumberOfSubjects": 7}""");

public class Student
{
  public required string StudentName { get; set; }
  public int NumberOfSubjects { get; set; }
}

This raises a JsonException because the required property (StudentName) was omitted from the JSON.

JsonSerializerOptions.Default

Developers can now pass around and access a read-only instance of the default serializer options through the static JsonSerializerOptions.Default property. Here is some code that demonstrates this:

public class Convert : JsonConverter<int>
{
  private static readonly JsonConverter<int> s_default =
    (JsonConverter<int>)JsonSerializerOptions.Default.GetConverter(typeof(int));

  public override void Write(Utf8JsonWriter w, int val, JsonSerializerOptions o)
  {
    w.WriteStringValue(val.ToString());
  }

  public override int Read(ref Utf8JsonReader r, Type type, JsonSerializerOptions o)
  {
    return s_default.Read(ref r, type, o);
  }
}

Performance upgrades

The fastest System.Text.Json yet ships with .NET 7. Both the internal implementation and the user-facing APIs have received a number of performance-focused upgrades. For a thorough analysis of the System.Text.Json performance improvements, see the relevant section of Stephen Toub's article, Performance Improvements in .NET 7.

Closing

This year, we have concentrated on improving System.Text.Json’s extensibility, consistency, and reliability while also committing to yearly performance improvements.

Extension methods in C#: Consideration of Capabilities and Features

How to Use Extension Methods in C#

Programming requires giving a computer specific instructions, which is typically a time-consuming process. Programming languages let us communicate with computers, but doing so means supplying a long list of directions. Extension methods make this process much easier for many programmers: because the basic structure is already in place, a programmer can use an extension method instead of writing an entire method from scratch.

There are many different ways to use extension methods in C# programming, and in some situations users write their own methods to reuse in later programs. In addition to developing your own, you can use some of the most significant extension methods in the .NET Framework. Combining C# and .NET can make your life as a programmer much simpler.

What is an Extension Method?

Extension methods are fairly prevalent in the realm of computer programming, and whether a programmer uses them with .NET or LINQ, they can accomplish a lot of work with them. In fact, C# is one of the few languages, if not the only one, that natively lets you use extension methods with such simplicity.

Simply put, an extension method lets you extend a data type in your software. In the most basic terms, you define a static method in a static class. The method's first parameter, marked with the this modifier placed in front of it, identifies the data type the method extends.

An example of an extension method in use can be found below.

public static class Extensions
{
  public static int OneHalf(this int source)
  {
    return source / 2;
  }
}

Take note of a few of the program's main components. The first line, public static class Extensions, names and declares the class. Note the word static in the class declaration: the class must be static for it to hold extension methods.

The program's next line is public static int OneHalf(this int source). There are important details here as well. The method, like the class, must be static. The this int source parameter marks int as the extended data type, so OneHalf operates on an integer.
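Once the class above compiles, OneHalf can be called directly on any int, as if it were an instance method. A quick sketch:

```csharp
public static class Extensions
{
    public static int OneHalf(this int source)
    {
        return source / 2;
    }
}

class ExtensionSample
{
    static void Main()
    {
        // Calling the extension method on the value itself...
        int half = 10.OneHalf();              // 5

        // ...is equivalent to an ordinary static call.
        int same = Extensions.OneHalf(10);    // 5

        System.Console.WriteLine(half == same);   // prints True
    }
}
```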

How Extension Methods Make Programming Easier

public static class ExtensionsInt
{
  public static int OneHalf(this int solution)
  {
    return solution / 2;
  }

  public static int ExponentCube(this int solution)
  {
    return (int)Math.Pow(solution, 3);
  }

  public static int ExponentSquare(this int solution)
  {
    return (int)Math.Pow(solution, 2);
  }
}

The methods above show how extension methods can simplify programming. Look at the three methods: each performs a simple mathematical operation. The first divides a number by two. The second raises a number to the third power (cubes it). The third does a similar job, but squares the number instead of cubing it.

Suppose you wanted to cube a value, then square the result, and finally divide the outcome by two. With conventional static-method calls, you would produce a line of code that is exceedingly difficult to read. Let's examine how this would traditionally be done with 25.

var answer = ExtensionInt.OneHalf(ExtensionInt.ExponentSquare(ExtensionInt.ExponentCube(25)));

As the example above shows, solving a problem like this requires a lot of nesting. You must make sure that everything is nested correctly and evaluated in the right order. If you made a mistake, you would need to check every character in this line of code to confirm it still makes sense, and it is easy to lose track of how many parentheses are involved.

On the other hand, this line of code would be much shorter if you were to use extension methods.

var answer = 25.ExponentCube().ExponentSquare().OneHalf();

The code above accomplishes the same task as the code you saw previously, but it is much shorter and much easier to comprehend. If you made a mistake, it would be simple to spot because the code states so plainly what it does. This is the strength of extension methods: they let you add functionality to a type that it otherwise would not have, and your programs become considerably easier to read and maintain.

Important things to keep in mind about C# extension methods:

When utilizing C# extension methods, keep the following in mind:

  1. Extension methods may only be defined in a static class.
  2. An extension method is declared as a static method, but it is called as though it were an instance method of the type it extends.
  3. The first parameter of an extension method is the binding parameter; it is prefixed with the "this" keyword and names the type the method binds to.
  4. An extension method can have only one binding parameter, and it must come first in the parameter list.
  5. If necessary, normal parameters can be added to an extension method definition starting from the second position in the parameter list.
  6. Extension methods can be used anywhere in the application by including their namespace.
  7. Extension methods do not support method overriding.
  8. The containing static class must be defined at the top level (it cannot be nested).
  9. Extension methods cannot be defined for fields, properties, or events.
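Rule 5 can be sketched with a hypothetical Repeat method (an illustrative name, not a built-in API): the "this" parameter comes first, and an ordinary parameter follows from the second position.

```csharp
using System;
using System.Linq;

public static class StringExtensions
{
    // First parameter is the binding parameter; 'count' is a normal parameter.
    public static string Repeat(this string source, int count)
    {
        return string.Concat(Enumerable.Repeat(source, count));
    }
}

public class Demo
{
    public static void Main()
    {
        Console.WriteLine("ab".Repeat(3)); // prints "ababab"
    }
}
```

The caller supplies only the normal parameters; the binding parameter is filled in by the value the method is invoked on.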

Advantages:

  1. The main benefit of the extension method is that it avoids using inheritance to add new methods to the existing class.
  2. Without changing the existing class’s source code, new methods can be added to it.
  3. Additionally, it can be used with sealed classes.
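The third advantage can be illustrated with string, which is a sealed class in .NET: you cannot inherit from it, but you can still attach new methods to it. A small sketch, where Shout is a made-up method name:

```csharp
using System;

public static class SealedClassExtensions
{
    // string is sealed, so it cannot be subclassed, but it can still be extended.
    public static string Shout(this string source)
    {
        return source.ToUpperInvariant() + "!";
    }
}

public class Demo
{
    public static void Main()
    {
        Console.WriteLine("hello".Shout()); // prints "HELLO!"
    }
}
```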

Closing

Hope this helped you understand what C# extension methods are and how to use them. They give us the ability to quickly add functionality to an existing type without having to write a wrapper for it or inherit from it.

Be careful not to misuse them, as with any design pattern! Everything has its place. Extension methods are certainly cool, but they might not always be the answer to your problem. Therefore, use caution in how and where you choose to utilize them.

NULL in C# and Nullable Types – Explanations With Detailed Examples



What are NULL and Nullable Types in C#

The C# compiler forbids you from assigning a null value to a value-type variable. As a result, C# 2.0 introduced a feature known as the Nullable type, which allows you to assign null to such a variable. Nullable types, as introduced in C# 2.0, work only with Value Types, not with Reference Types.

Nullable reference types were added later, in C# 8.0 (2019), to let you explicitly declare whether a reference type can carry a null value; this helps address the NullReferenceException problem without resorting to conditionals. This article centers on nullable types for value types.

A nullable type is an instance of the System.Nullable<T> struct, where T is a non-nullable value type such as an integer, floating-point, or boolean type. For instance, a nullable integer can store null or any value between -2147483648 and 2147483647.

Null Syntax

Nullable<data_type> variable_name = null;

The syntax above declares a C# nullable data type. Nullable refers to the System.Nullable<T> struct, where T stands for a non-nullable value type such as Boolean, integer, or floating-point. data_type indicates the variable's underlying data type, variable_name is the variable's name, and the variable is assigned a null value.

Or you can use a shortcut that combines the data type with the ? operator.

datatype? variable_name = null;

Example:

// This will result in a compilation error.
int j = null; 

// Valid declaration
Nullable<int> j = null; 

// Valid declaration
int? j = null;

Is There a Difference Between Nullable<T> and ? Annotation?

Both “Nullable<T>” and “?” are equivalent; “?” is an abbreviation of “Nullable<T>”.

The “Nullable<T>” type is merely a wrapper around the real value and, unless a value is assigned, it holds null. It won't behave differently from the underlying type because it isn't a new type.

How Can I Get the Value of a Variable of the Nullable Type?

The Nullable type's value is not directly accessible. Use the GetValueOrDefault() method: if the value is not null, it retrieves the original value; if it is null, you receive the type's default value, which is zero for numeric types.

Example:

// C# example program for Nullable Types 
using System;
class Geeks {
  // Main Method
  static void Main(string[] args)
  {
    // specifying the Nullable type
    Nullable<int> n = null;
    // output will be 0 by default (the value of null is 0)
    Console.WriteLine(n.GetValueOrDefault());

    // specifying the Nullable type with the ? syntax
    int? n1 = null;
    // output will be 0 by default
    Console.WriteLine(n1.GetValueOrDefault());

    // nullable variable with a value assigned
    int? n2 = 47;
    Console.WriteLine(n2.GetValueOrDefault());

    // nullable variable with a value assigned
    Nullable<int> n3 = 457;
    Console.WriteLine(n3.GetValueOrDefault());
  }
}

Result: 

0
0
47
457

Characteristics of Nullable Types in C#

 

  • The nullable type's value cannot be accessed directly. Call the GetValueOrDefault() method: it returns the assigned value if one was set, otherwise the type's default value (zero for numeric types).
  • A nullable type lets you assign null to a value-type variable without first wrapping it in a reference type.
  • Nullable types can have values assigned to them.
  • You can check for a value with Nullable<T>.HasValue and read it with Nullable<T>.Value. HasValue returns true when a value has been assigned and false when the variable is null; reading Value while HasValue is false throws an InvalidOperationException at runtime.
  • Nullable types can be used with the == and != operators.
  • The GetValueOrDefault(T) overload returns the assigned value, or the supplied default if the nullable type is null.
  • The null-coalescing operator (??) can also be used to supply a value for a nullable type.
  • Nested nullable types are not supported.
  • The var type is not supported with the ? annotation; if nullable is used with var, the compiler issues a compile-time error.
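Several of the points above (HasValue, the runtime exception from Value, and the null-coalescing operator) can be seen in one short sketch; the Demo class name is illustrative:

```csharp
using System;

public class Demo
{
    public static void Main()
    {
        int? n = null;

        Console.WriteLine(n.HasValue);            // False
        Console.WriteLine(n.GetValueOrDefault()); // 0
        Console.WriteLine(n ?? -1);               // -1 (null-coalescing fallback)

        try
        {
            Console.WriteLine(n.Value);           // throws when HasValue is false
        }
        catch (InvalidOperationException e)
        {
            Console.WriteLine(e.Message);
        }

        n = 42;
        Console.WriteLine(n.HasValue);            // True
        Console.WriteLine(n ?? -1);               // 42
    }
}
```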

How to Determine if a Value Type is Nullable

It is not simple to recognize a nullable value type at runtime. To check whether a given instance belongs to a nullable value type, use the static Nullable.GetUnderlyingType(Type nullableType) method.

GetUnderlyingType returns the Nullable type's underlying type if the argument is a nullable value type; otherwise, it returns null.

One approach is an extension method. The following extension method makes it simple to check for nullable value types:

public static bool IsOfNullableType<T>(this T _)
{
    Type type = typeof(T);
    return Nullable.GetUnderlyingType(type) != null;
}
int? num = null; 
Console.WriteLine(num.IsOfNullableType()); // True

Don’t Use Object.GetType

Don't rely on the Object.GetType method to check whether an object is a nullable value type. Calling GetType on an instance of a nullable value type boxes the value to Object, and for a non-null instance the Type returned by GetType represents the underlying type, not Nullable<T>.

Don’t Use ‘is’ Operator

The "is" operator cannot detect a nullable value type either. The is operator returns true if an instance is of the specified type and false otherwise; but an instance of a nullable value type also satisfies the check for its underlying type, so "is" cannot distinguish between the two.
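Both pitfalls can be observed directly: GetType on a non-null int? reports System.Int32 because the value is boxed first, and "is int" is true for a nullable instance, so neither check distinguishes int? from int.

```csharp
using System;

public class Demo
{
    public static void Main()
    {
        int? i = 5;

        // Boxing strips the Nullable<T> wrapper, so GetType reports the underlying type.
        Console.WriteLine(i.GetType()); // System.Int32, not Nullable<Int32>

        // 'is' is satisfied by the underlying type, so it cannot tell them apart.
        Console.WriteLine(i is int);    // True (would be False only if i were null)
    }
}
```

Note that calling GetType on a null int? throws a NullReferenceException, because the boxed representation of a null nullable is a null reference.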

Nullable Type’s Benefits in C#

  • Database applications employ nullable types. A nullable type can be used to assign null values to a column in a database that needs them.
  • Nullable types can be used to represent undefined values.
  • Instead of utilizing a reference type, a nullable type can be used to hold a null value.

Helper Class for Null

Since null has a lower value than any other value, comparison operators cannot be used meaningfully with null; instead, we use the static Nullable class, which serves as a helper class for Nullable types. The static Nullable class provides the GetUnderlyingType method, which returns the type argument of a nullable type.
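A direct use of the helper can be sketched as follows: GetUnderlyingType yields the type argument for a nullable type and null for anything else.

```csharp
using System;

public class Demo
{
    public static void Main()
    {
        // Nullable type: returns the underlying type argument.
        Console.WriteLine(Nullable.GetUnderlyingType(typeof(int?))); // System.Int32

        // Non-nullable type: returns null.
        Console.WriteLine(Nullable.GetUnderlyingType(typeof(int)) == null); // True
    }
}
```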

How the Nullable Type Works in C#

Primitive data types such as numbers are value types. Value types are stored on the stack and are implicitly initialized to their default values by the .NET framework, even when they are not explicitly initialized at the point of definition. For instance, a Boolean variable defaults to false, while an integer defaults to zero; every value type has a default value. None of them, however, can represent a null value, even though database applications frequently use null values and representing them is crucial.

Any value chosen as a sentinel for null may not even be within the permitted range for the data type; if we chose -1 to represent null for a value type, for instance, -1 might not be an acceptable value for that type. One must also ensure that the value chosen to represent null is never used for any other purpose within the application. C# 2.0 introduced the nullable type to address this problem. The System.Nullable<T> struct can be outlined as follows:

Code:

namespace System
{
  public struct Nullable<T> where T : struct
  {
    public Nullable(T value);
    public static explicit operator T(T? value);
    public static implicit operator T?(T value);
    public T Value { get; }
    public bool HasValue { get; }
    public T GetValueOrDefault();
  }
}

Here, the struct accepts a single type parameter, T, which must be a value type. Using this syntax, any value type can be declared as nullable.

Lifted Operators

The corresponding nullable value type T? supports the predefined unary and binary operators as well as any overloaded operators that are supported by a value type T. When one or both operands are null, these operators, also referred to as lifted operators, produce null; otherwise, the operator calculates the result using the contained values of its operands. For instance:

int? e = 10;
int? r = null;
int? t = 10;
e++; // e is 11
e = e * t; // e is 110
e = e + r; // e is null

If one or both operands are null for the comparison operators <, >, <=, and >=, the result is false; otherwise, the operands' contained values are compared. Never assume that just because one comparison (like <=) returns false, the opposite comparison (>) will return true. The example below demonstrates that 10 is

  • neither greater than or equal to null
  • nor less than null
  • nor equal to null

int? i = 10;
Console.WriteLine($"{i} >= null is {i >= null}");
Console.WriteLine($"{i} < null is {i < null}");
Console.WriteLine($"{i} == null is {i == null}");
// Output:
// 10 >= null is False
// 10 < null is False
// 10 == null is False
int? q = null;
int? w = null;
Console.WriteLine($"null >= null is {q >= w}");
Console.WriteLine($"null == null is {q == w}");
// Output:
// null >= null is False
// null == null is True

If both operands are null for the equality operator ==, the result is true; if only one operand is null, the result is false; otherwise, the operands’ contained values are compared.

When using the inequality operator !=, the result is false if both operands are null and true if only one operand is null; otherwise, the operands' contained values are compared.

If a user-defined conversion between two value types already exists, it can be applied to the corresponding nullable value types as well.
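As a sketch of this lifting rule, suppose a hypothetical Meters struct defines an implicit conversion from double. That conversion is then automatically lifted to a conversion from double? to Meters?, with null propagating through:

```csharp
using System;

public struct Meters
{
    public double Value;
    public Meters(double value) { Value = value; }

    // User-defined conversion between two value types...
    public static implicit operator Meters(double value) => new Meters(value);
}

public class Demo
{
    public static void Main()
    {
        double? d = 2.5;
        Meters? m = d;                 // ...is lifted: double? -> Meters?
        Console.WriteLine(m.HasValue); // True

        d = null;
        m = d;                         // null propagates through the lifted conversion
        Console.WriteLine(m.HasValue); // False
    }
}
```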

Let’s Take a Look at Some General Examples:

Example #1

A C# program that displays nullable types when variables have no value

Code:

// a C# program to demonstrate
// the use of Nullable types
using System;
class GFG {
  // Main Method
  static public void Main()
  {
    // a is a nullable type
    // and contains a null value
    int? a = null;
    // b is a nullable int
    // and behaves like a normal int
    int? b = 2345;
    // this will not print
    // anything on the console
    Console.WriteLine(a);
    // gives 2345 as output
    Console.WriteLine(b);
  }
}

Output:

2345

The first Console.WriteLine prints only an empty line, because a is null.

Example #2

A C# program to demonstrate the Nullable<T>.HasValue property.

Code:

using System;
public class GFG {
  //Defining Main Method
  public static void Main()
  {
    // creating a nullable type for the variable e and giving it a value
    Nullable<int> e = 100;
    // the value of the variable e is checked
    Console.WriteLine(e.HasValue);
    // creating a nullable type for the variable i and assigning it null
    Nullable<int> i = null;
    // check the value of the object
    Console.WriteLine(i.HasValue);
  }
}

Output:

True
False

Conclusion

In this tutorial, we first defined the idea of a nullable type and saw how it actually functions. We then walked through several C# programs that use nullable types and examined their outputs.

GPT-4 AI Language Model: Deep Dive Into Advanced Technology



All Information Regarding GPT-4

We live in unusual times, where each new model launch dramatically transforms the field of artificial intelligence. OpenAI unveiled DALL-E 2, a cutting-edge text-to-image model, in July 2022. A few weeks later, Stability AI introduced Stable Diffusion, an open-source counterpart to DALL-E 2. Both of these popular models have demonstrated promising results in quality and in their ability to understand prompts.

Whisper is an Automatic Speech Recognition (ASR) model that was just released by OpenAI. In terms of accuracy and robustness, it has fared better than any other model.

We can infer from this trend that OpenAI will release GPT-4 in the coming months. Large language models are in high demand, and the success of GPT-3 means people anticipate GPT-4 to deliver better accuracy, compute efficiency, lower bias, and improved safety.

Despite OpenAI’s silence regarding the release or features, we will make some assumptions and predictions about GPT-4 in this post based on AI trends and the data supplied by OpenAI. Additionally, we will study extensive language models and their uses.

What is GPT?

A text generation deep learning model called Generative Pre-trained Transformer (GPT) was learned using internet data. It is employed in conversation AI, machine translation, text summarization and classification.

By studying the Deep Learning in Python skill track, you can learn how to create your own deep learning model. You will learn about the principles of deep learning, the Tensorflow and Keras frameworks, and how to use Keras to create numerous input and output models.

GPT models have countless uses, and you can even fine-tune them using particular data to produce even better outcomes. You can cut expenditures on computing, time, and other resources by employing transformers.

Prior to GPT

Before GPT-1, most Natural Language Processing (NLP) models were trained for specific tasks like classification, translation, etc., each using supervised learning. Two problems arise with this form of learning: the need for labeled data and the inability to generalize across tasks.

GPT-1

The GPT-1 paper, Improving Language Understanding by Generative Pre-Training (117M parameters), was released in 2018. It proposed a generative language model that was trained on unlabeled data and then fine-tuned for specific downstream tasks such as classification and sentiment analysis.

GPT-2

The GPT-2 paper, Language Models are Unsupervised Multitask Learners (1.5B parameters), was released in 2019. It was trained on a larger dataset with more model parameters to create an even more powerful language model. GPT-2 employs task conditioning, zero-shot learning, and zero-shot task transfer to improve performance.


GPT-3

The GPT-3 paper, Language Models are Few-Shot Learners (175B parameters), was released in 2020. The model contains 100 times more parameters than GPT-2 and was trained on an even larger dataset to perform well on downstream tasks. It astounded the world with human-like story writing, SQL queries, Python scripts, language translation, and summarization. It achieved state-of-the-art results using in-context learning in few-shot, one-shot, and zero-shot settings.

What’s New in GPT-4?

Sam Altman, the CEO of OpenAI, confirmed the reports regarding the introduction of the GPT-4 model during the question-and-answer portion of the AC10 online meetup. This section will make predictions about the model size, optimal parameter and computation, multimodality, sparsity, and performance utilizing that data in conjunction with current trends.

Model Size

Altman predicts that GPT-4 won’t be significantly larger than GPT-3. Therefore, we can assume that it will have parameters between 175B and 280B, similar to Deepmind’s Gopher language model.

With 530B parameters, Megatron-Turing NLG is three times bigger than GPT-3 yet performs comparably, and smaller models released since have attained higher performance levels. Simply put, bigger does not mean better performance.

According to Altman, they are concentrating on improving the performance of smaller models. It was necessary to use a massive dataset, a lot of processing power, and a complicated implementation for the vast language models. For many businesses, even installing huge models is no longer cost-effective.

Ideal parameterization

Large models are typically under-optimized. Because training is so expensive, companies must trade accuracy against cost. For instance, GPT-3 was trained only once despite errors in the run; researchers could not perform hyperparameter tuning because of the prohibitive price.

Microsoft and OpenAI have demonstrated that GPT-3 could be improved with appropriate hyperparameter tuning: a 6.7B GPT-3 model with tuned hyperparameters matched the performance of the 13B GPT-3 model.

According to a novel parameterization theory (µP), the best hyperparameters for larger models with the same architecture are the same as those for smaller ones. This has made it much more affordable for researchers to optimize big models.

Ideal computation

Microsoft and OpenAI demonstrated last month that GPT-3 could be improved further if the model were trained with ideal hyperparameters. They found that a 6.7B version of GPT-3 improved its performance so much that it was on par with the original 13B model; for smaller models, hyperparameter optimization produced a performance boost equivalent to doubling the number of parameters. The best hyperparameters for a small model were also the best for a larger one in the same family, according to a new parameterization they discovered (called µP). This let them optimize models of any size for a tiny fraction of the usual training cost, and the larger model can then adopt the hyperparameters almost instantly.

Models with optimal computing

Recently, DeepMind revisited Kaplan's research and discovered that, contrary to popular belief, the number of training tokens affects performance as much as model size does. They concluded that the compute budget should be split roughly equally between scaling parameters and data. They demonstrated this hypothesis by training Chinchilla, a 70B model (four times smaller than Gopher, the previous SOTA), on four times as much data as every major language model since GPT-3 (1.4T tokens, as opposed to the typical 300B). The results were unmistakable: across a variety of language benchmarks, Chinchilla outperformed Gopher, GPT-3, MT-NLG, and all other language models “uniformly and significantly”. Today's models are big but undertrained.

Given that GPT-4 will be slightly bigger than GPT-3, it would require about 5 trillion training tokens to be compute-optimal, according to DeepMind’s findings. This is an order of magnitude more training tokens than are now available. Using Gopher’s compute budget as a proxy, they would need between 10 and 20 times more FLOPs to train the model than they used for GPT-3 in order to achieve the lowest training loss. When he indicated in the Q&A that GPT-4 will need a lot more computing than GPT-3, Altman might have been alluding to this.
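DeepMind's findings allow a rough back-of-the-envelope check on the 5-trillion-token figure. Assuming Chinchilla's heuristic of roughly 20 training tokens per parameter (an approximation, not an exact rule) and the 175B to 280B parameter range discussed above:

```csharp
using System;

public class Demo
{
    public static void Main()
    {
        // Assumed parameter range for GPT-4, from the model-size discussion above.
        double paramsLow = 175e9, paramsHigh = 280e9;

        // Chinchilla's rough heuristic: ~20 training tokens per parameter.
        double tokensPerParam = 20;

        Console.WriteLine($"{paramsLow * tokensPerParam / 1e12}T tokens");  // 3.5T tokens
        Console.WriteLine($"{paramsHigh * tokensPerParam / 1e12}T tokens"); // 5.6T tokens
    }
}
```

Both estimates sit an order of magnitude above the roughly 300B tokens typical of GPT-3-era training runs, consistent with the figure quoted above.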

Although the extent to which OpenAI will incorporate these optimality-related findings into GPT-4 is unknown, it is certain that they will do so to some degree, and that they will concentrate on optimizing factors other than model size. Finding the compute-optimal model size, number of parameters, and set of hyperparameters could lead to astounding improvements across all benchmarks. If these methods are merged into a single model, then existing forecasts for language models will be wrong.

Altman also said that people wouldn't believe how much better models can become without necessarily being made larger. He might be implying that scaling efforts are over for the time being.

GPT-4 will only be available in text

Multimodal models are the deep learning of the future. Because we inhabit a multimodal world, human brains are multisensory. AI’s ability to explore or comprehend the world is severely constrained by only perceiving it in one mode at a time.

However, creating an effective multimodal model is much more difficult than creating an effective language-only or vision-only model. Combining visual and textual information into a single representation is hard, and since we have very little understanding of how the brain does it, we are unsure how to implement it in neural networks (not that the deep learning community pays much attention to insights from the cognitive sciences on brain anatomy and operation). In the Q&A, Altman stated that GPT-4 will be a text-only model rather than multimodal (like DALL-E or MUM). My guess is that they are trying to push language models to their limit before moving on to the next generation of multimodal AI.

Sparsity

Recent research has achieved considerable success with sparse models, which use conditional computation to process different types of input with different parts of the model. These models scale beyond the 1T-parameter threshold without incurring proportionally high computational costs, producing a seemingly orthogonal relationship between model size and compute budget. On very large models, however, the advantages of MoE (Mixture-of-Experts) techniques diminish.

It makes sense to assume that GPT-4 will also be a dense model given the history of OpenAI’s emphasis on dense language models. Altman also stated that GPT-4 won’t be significantly bigger than GPT-3, therefore we may infer that sparsity is not an option for OpenAI at this time.

Given that our brain, which served as the inspiration for AI, significantly relies on sparse processing, sparsity, similar to multimodality, will most certainly predominate the future generations of neural networks.

AI alignment

The AI alignment problem, which is how to get language models to follow our intents and uphold our ideals, has been the focus of a lot of work at OpenAI. It’s a challenging subject both theoretically (how can we make AI grasp what we want precisely?) and philosophically (there isn’t a single way to make AI align with humans because human values vary greatly between groups and are sometimes at odds with one another).

 

Their initial effort was InstructGPT, a retrained GPT-3 taught with human feedback to follow instructions (whether those instructions are well-intentioned or not is not yet something the models take into account).

The primary innovation of InstructGPT is that, despite its results on language benchmarks, it is rated as a better model by human judges (judges made up of English speakers and OpenAI staff, so we should be cautious about drawing generalizations). This emphasizes the need to move past benchmarks as the sole criterion for judging AI's aptitude. How humans perceive the models is perhaps even more crucial than the models themselves.

Given Altman and OpenAI’s dedication to creating a useful AGI, I do not doubt that GPT-4 will put the information they gathered from InstructGPT to use and expand upon.

Given that it was only available to OpenAI personnel and English-speaking labelers, they will improve the alignment process. True alignment should involve individuals and groups with various backgrounds and characteristics in terms of gender, ethnicity, nationality, religion, etc. Any progress made in that direction is appreciated, though we should be careful not to refer to it as alignment since most people don’t experience it that way.

Conclusion

Model size: GPT-4 will be slightly larger than GPT-3 but not by much when compared to the largest models currently available (MT-NLG 530B and PaLM 540B). Model size won’t be a defining characteristic.

Optimality: GPT-4 will use more compute than GPT-3. It will apply new optimality findings on parameterization (optimal hyperparameters) and scaling laws (the number of training tokens is as important as model size).

Multimodality: GPT-4 will be a text-only model (not multimodal). OpenAI wants to fully exploit language models before switching entirely to multimodal models like DALL-E, which they believe will eventually outperform unimodal systems.

Sparsity: GPT-4 will be a dense model, continuing the trend from GPT-2 and GPT-3 (all parameters will be in use to process any given input). In the future, sparsity will predominate more.

Alignment: GPT-4 will be more aligned with us than GPT-3. It will apply the lessons learned from InstructGPT, which was trained with human feedback. Still, AI alignment has a long way to go, and efforts should be assessed carefully and not overstated.