A388

Windows Server 2019 Migration, Windows 10 Cleaner Bug Fix, and Other News

I spent most of the weekend migrating to Windows Server 2019, coming from Windows Server 2016. I didn't really have a reason to do so other than to dip my toes into Server 2019 and to be on the latest and greatest. Everything went pretty smoothly, for the most part, so now I'm on Server 2019.

In the process of migrating to Windows Server 2019, I decided to use my handy-dandy Windows 10 Cleaner PowerShell script..., and found out that it had a bug. Once I finished the migration, I dove into figuring out what was causing it. It turned out that my HTML minifier middleware was minifying every response, regardless of its content type. After a few hours, I fixed it to only minify text/html responses, and the script is now served correctly. Yay!
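
For anyone curious what that kind of fix looks like, here's a minimal sketch of the idea, assuming an ASP.NET Core middleware and a hypothetical IHtmlMinifier service (my actual middleware differs in the details); the important part is checking the Content-Type before touching the response:

using Microsoft.AspNetCore.Http;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public sealed class HtmlMinifierMiddleware {
    private readonly IHtmlMinifier _minifier;
    private readonly RequestDelegate _next;

    public HtmlMinifierMiddleware(
        RequestDelegate next,
        IHtmlMinifier minifier) {
        _minifier = minifier;
        _next = next;
    }

    public async Task Invoke(
        HttpContext context) {
        var originalBody = context.Response.Body;

        using (var buffer = new MemoryStream()) {
            //  Buffer the downstream response so it can be inspected.
            context.Response.Body = buffer;

            await _next(context);

            buffer.Position = 0;
            context.Response.Body = originalBody;

            //  Only minify HTML; scripts, JSON, and files pass through untouched.
            if (context.Response.ContentType?.StartsWith("text/html") == true) {
                var html = await new StreamReader(buffer).ReadToEndAsync();
                var bytes = Encoding.UTF8.GetBytes(_minifier.Minify(html));

                context.Response.ContentLength = bytes.Length;

                await context.Response.Body.WriteAsync(bytes, 0, bytes.Length);

                return;
            }

            await buffer.CopyToAsync(context.Response.Body);
        }
    }
}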

Lastly, after the migration, I found out that Visual Studio 2017's FTP publishing over TLS is ancient and only supports up to TLS 1.0. I'm not happy about it because I was really trying to get an A+ rating on SSL Labs, but I can't and still be able to publish from within Visual Studio. Really not happy about it. If anyone runs into an issue like this, you need to have at least the TLS_RSA_WITH_AES_256_CBC_SHA (0x35) cipher enabled on your server, if you have control over that. Microsoft, please update FTP publishing in Visual Studio 2019 to support TLS 1.2 and higher!

Arex388.Geocodio 1.2.0 Released

This update adds backwards-compatibility support. The GeocodioClient now accepts a third argument for the endpoint version. By default the client will always use the latest version, but it can be changed using the new EndpointVersions constant.
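
Here's a hedged sketch of what that looks like; the constructor shape mirrors the other clients (an HttpClient plus the API key), and the specific version constant name is a placeholder rather than the library's actual value:

//  Hedged sketch: EndpointVersions.V1_6 is a placeholder constant name;
//  check the library for the versions it actually defines.
var geocodio = new GeocodioClient(
    httpClient,
    "{key}",
    EndpointVersions.V1_6
);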

Arex388.Geocodio 1.1.0 Released

This update includes full support for all fields and their return values. Originally, I had implemented the lookup for fields, but for some reason I omitted the return values. Since Geocodio updated its fields, I decided to rectify this omission. Please be aware that field lookups will result in additional charges from Geocodio.
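
As a hedged sketch, a field lookup would look something like the following; the Fields property name and its format are assumptions on my part, while "cd" and "timezone" are field identifiers from the Geocod.io documentation:

//  Hedged sketch: the Fields property is illustrative and may not match the
//  library's actual request shape.
var response = await geocodio.GetGeocodeAsync(new GeocodeRequest {
    Address = "1600 Pennsylvania Ave NW, Washington, DC 20500",
    Fields = "cd,timezone"
});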

Windows 10 Cleaner v2 Released

DEPRECATED

This version has been deprecated and is no longer available. The latest version can be found at its new website cleaner10.io.

I'm happy to announce the release of v2 of my Windows 10 Cleaner PowerShell script. I rewrote the script from scratch to be more automated, fix some bugs, and improve its performance (where possible). It is also now slightly customizable, so you can select what it does. The script has been tested on Windows 10 1803 (April 2018 Update) and 1809 (October 2018 Update) ONLY.

CSAND's Windows 10 Decrapifier script is still being used, except this time you won't have to download it yourself; the script will download it for you and run it immediately.

Download the script here.

Projection-View Pattern: Improving on the Vertical Slices Architecture

Around four and a half years ago, I started a project at work with the goal of it becoming our admin system and removing our painfully expensive dependency on Salesforce. When I started the project, I applied a layered design to the architecture because I thought it would be robust and scalable, and because everyone was doing it. After some time, it became clear that it was anything but. Time was precious, though, and I had to commit to the architecture.

The solution was split into 10 different projects. Some were utilities or extensions, and others were the core of the system. Those core projects were very tightly coupled to each other and very rigid. Changes had to propagate all the way through the layers, and sometimes they didn't quite fit or feel right. Slowly but surely, the system became difficult to maintain and almost impossible to add new features to.

As I kept going with this monster I had created, I happened to read an article by Jimmy Bogard about the Vertical Slices architecture. I "understood" it, but it didn't really click until I watched a recording of a talk he gave about the architecture. There's also a more recent talk of his on the architecture that I found while preparing the material for this post. In the talks he shows how MediatR came about and how to use it to implement a Vertical Slices architecture, so I decided to give it a shot, since the layered architecture I was using wasn't going to get any better.

There was some trial and error until I started to get the hang of it and adapt my thought process to the new architecture. While I was starting to see improvements, I also started to run into issues with the way I was querying the database. Jimmy came to the rescue again with another article, this time about Entity Framework Plus and its future queries.
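
If you haven't used future queries before, here's a tiny illustration of the idea (not code from my project): every query marked as a future is deferred and then executed together, in a single round trip, the first time any of them is enumerated.

//  Assumes a DbContext instance named context and the Z.EntityFramework.Plus
//  package. Neither query hits the database yet; both are registered as futures.
var states = context.States.Where(s => s.IsActive).Future();
var statuses = context.Statuses.Where(s => s.IsActive).Future();

//  Enumerating the first future executes both queries in one batched round trip.
var stateList = states.ToList();

//  Already materialized by the batch above; no additional round trip.
var statusList = statuses.ToList();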

After implementing EFP, the load on the database improved, but it wasn't quite enough. You see, I was carrying over a relic from the old architecture where I had set up massive projections in AutoMapper to literally project into a complete View. The more data I tried to query out into a view, the larger and more bloated the queries became. Entity Framework didn't help with the bloat it generates either; I was easily ending up with queries that were 200+ lines long.

I thought about how to improve the situation and realized that the answer was actually really simple: just break it up into smaller queries. The query-into-a-view "pattern" had come about because I was trying to be efficient when querying the database. With EFP in the picture, I could accomplish the same thing without the bloated, complex queries.

After breaking up the queries I saw massive improvements, but I also noticed that the projection models didn't always match the view models. This is how what I call the Projection-View pattern came to be. I split the query handlers into two mappings: first, I projected the data from the database into DTOs; then, I mapped those projections to the final view models. Sometimes the projection and view models were the same, but quite often they were not.

So, What Does the Projection-View Pattern Look Like?

Let's see if I can make this clearer with some code examples. First, I introduced a few base classes:

  • ProjectionBase, which the projection models derive from.
  • ViewBase, which the final view models derive from.
    • Technically, I already had this class from long ago, but now it serves a new, more fitting purpose.
  • QueryHandlerBase<TQuery, TProjection, TView>, which is the new base handler for queries and performs the mapping from projections to views.

Here are the actual classes.

public abstract class ProjectionBase {
}

public abstract class ViewBase {
}

public abstract class QueryHandlerBase<TQuery, TProjection, TView> :
    HandlerBase<TQuery, TView>
    where TQuery : IRequest<TView>
    where TProjection : ProjectionBase, new()
    where TView : ViewBase {
    protected QueryHandlerBase(
        MyDbContext context,
        IMapper mapper) :
        base(context, mapper) {
    }

    protected override TView Handle(
        TQuery query) => GetView(query);

    protected virtual TProjection GetProjection(
        TQuery query) => new TProjection();

    protected virtual TView GetView(
        TQuery query) {
        var projection = GetProjection(query);
        var view = Mapper.Map<TView>(projection);

        Normalize(projection, view);

        return view;
    }

    protected virtual void Normalize(
        TProjection projection,
        TView view) {
    }
}

The QueryHandlerBase<TQuery, TProjection, TView> class inherits from the HandlerBase<TRequest, TResponse> class because that's where the MyDbContext and IMapper properties live. The HandlerBase class is also inherited by all command handlers for the same reason.
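
For reference, HandlerBase looks roughly like the following. I'm reconstructing it from how it's used above, so treat it as a sketch rather than the exact class:

using AutoMapper;
using MediatR;
using System.Threading;
using System.Threading.Tasks;

public abstract class HandlerBase<TRequest, TResponse> :
    IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {
    protected HandlerBase(
        MyDbContext context,
        IMapper mapper) {
        Context = context;
        Mapper = mapper;
        MapperConfig = mapper.ConfigurationProvider;
    }

    protected MyDbContext Context { get; }
    protected IConfigurationProvider MapperConfig { get; }
    protected IMapper Mapper { get; }

    //  MediatR entry point; wraps the synchronous Handle that the derived
    //  handlers override.
    public Task<TResponse> Handle(
        TRequest request,
        CancellationToken cancellationToken) => Task.FromResult(Handle(request));

    protected abstract TResponse Handle(
        TRequest request);
}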

What About a Real Code Example?

So, how would we use the above QueryHandlerBase then? Let's pretend we have an admin system (built on ASP.NET Core) for a construction company and we need to add, edit and list jobs. Our Add, Edit and List slices would look like this:

Add

public sealed class Add {
    public sealed class Command :
        IRequest<int> {
        public int? CsrId { get; set; }
        public string Name { get; set; }
        public int? StateId { get; set; }
        public int? StatusId { get; set; }
        public int? TypeId { get; set; }
    }

    public sealed class CommandHandler :
        HandlerBase<Command, int> {
        public CommandHandler(
            MyDbContext context,
            IMapper mapper) :
            base(context, mapper) {
        }

        protected override int Handle(
            Command command) {
            var job = Mapper.Map<Job>(command);

            Context.Add(job);
            Context.SaveChanges();

            return job.Id;
        }
    }

    public sealed class Query :
        IRequest<View> {
    }

    public sealed class QueryHandler :
        QueryHandlerBase<Query, Projection, View> {
        public QueryHandler(
            MyDbContext context,
            IMapper mapper) :
            base(context, mapper) {
        }

        protected override Projection GetProjection(
            Query query) {
            var countries = Context.Countries.OrderByDescending(
                c => c.Name).ProjectTo<SelectListGroup>(MapperConfig).Future();
            var projection = base.GetProjection(query);

            projection.CsrsSelectListItems = Context.Employees.Where(
                e => e.IsActive).OrderBy(
                e => e.Name).ProjectTo<SelectListItem>(MapperConfig).Future();
            projection.StatesSelectListItems = Context.States.Where(
                s => s.IsActive).OrderBy(
                s => s.Name).ProjectTo<SelectListItem>(MapperConfig, new {
                    countries
                }).Future();
            projection.StatusesSelectListItems = Context.Statuses.Where(
                s => s.IsActive).OrderBy(
                s => s.Name).ProjectTo<SelectListItem>(MapperConfig).Future();
            projection.TypesSelectListItems = Context.Types.Where(
                t => t.IsActive).OrderBy(
                t => t.Name).ProjectTo<SelectListItem>(MapperConfig).Future();

            return projection;
        }
    }

    public sealed class Projection :
        ProjectionBase {
        public QueryFutureEnumerable<SelectListItem> CsrsSelectListItems { get; set; }
        public QueryFutureEnumerable<SelectListItem> StatesSelectListItems { get; set; }
        public QueryFutureEnumerable<SelectListItem> StatusesSelectListItems { get; set; }
        public QueryFutureEnumerable<SelectListItem> TypesSelectListItems { get; set; }
    }

    public sealed class View :
        ViewBase {
        public IList<SelectListItem> CsrsSelectListItems { get; set; }
        public Command Job { get; } = new Command();
        public IList<SelectListItem> StatesSelectListItems { get; set; }
        public IList<SelectListItem> StatusesSelectListItems { get; set; }
        public IList<SelectListItem> TypesSelectListItems { get; set; }
    }
}

Edit

public sealed class Edit {
    public sealed class Command :
        IRequest<bool> {
        public int? CsrId { get; set; }
        public int Id { get; set; }
        public string Name { get; set; }
        public int? StateId { get; set; }
        public int? StatusId { get; set; }
        public int? TypeId { get; set; }
    }

    public sealed class CommandHandler :
        HandlerBase<Command, bool> {
        public CommandHandler(
            MyDbContext context,
            IMapper mapper) :
            base(context, mapper) {
        }

        protected override bool Handle(
            Command command) {
            var job = Context.Jobs.SingleOrDefault(
                j => j.Id == command.Id);

            if (job is null) {
                return false;
            }

            Mapper.Map(command, job);

            Context.SaveChanges();

            return true;
        }
    }

    public sealed class Query :
        IRequest<View> {
        public int Id { get; set; }
    }

    public sealed class QueryHandler :
        QueryHandlerBase<Query, Projection, View> {
        public QueryHandler(
            MyDbContext context,
            IMapper mapper) :
            base(context, mapper) {
        }

        protected override Projection GetProjection(
            Query query) {
            var countries = Context.Countries.OrderByDescending(
                c => c.Name).ProjectTo<SelectListGroup>(MapperConfig).Future();
            var projection = base.GetProjection(query);

            projection.CsrsSelectListItems = Context.Employees.Where(
                e => e.IsActive).OrderBy(
                e => e.Name).ProjectTo<SelectListItem>(MapperConfig).Future();
            projection.Job = Context.Jobs.Where(
                j => j.Id == query.Id).ProjectTo<Command>(MapperConfig).DeferredSingle().FutureValue();
            projection.StatesSelectListItems = Context.States.Where(
                s => s.IsActive).OrderBy(
                s => s.Name).ProjectTo<SelectListItem>(MapperConfig, new {
                    countries
                }).Future();
            projection.StatusesSelectListItems = Context.Statuses.Where(
                s => s.IsActive).OrderBy(
                s => s.Name).ProjectTo<SelectListItem>(MapperConfig).Future();
            projection.TypesSelectListItems = Context.Types.Where(
                t => t.IsActive).OrderBy(
                t => t.Name).ProjectTo<SelectListItem>(MapperConfig).Future();

            return projection;
        }
    }

    public sealed class Projection :
        ProjectionBase {
        public QueryFutureEnumerable<SelectListItem> CsrsSelectListItems { get; set; }
        public QueryFutureValue<Command> Job { get; set; }
        public QueryFutureEnumerable<SelectListItem> StatesSelectListItems { get; set; }
        public QueryFutureEnumerable<SelectListItem> StatusesSelectListItems { get; set; }
        public QueryFutureEnumerable<SelectListItem> TypesSelectListItems { get; set; }
    }

    public sealed class View :
        ViewBase {
        public IList<SelectListItem> CsrsSelectListItems { get; set; }
        public Command Job { get; set; }
        public IList<SelectListItem> StatesSelectListItems { get; set; }
        public IList<SelectListItem> StatusesSelectListItems { get; set; }
        public IList<SelectListItem> TypesSelectListItems { get; set; }
    }
}

List

public sealed class List {
    public sealed class Query :
        IRequest<View> {
    }

    public sealed class QueryHandler :
        QueryHandlerBase<Query, Projection, View> {
        public QueryHandler(
            MyDbContext context,
            IMapper mapper) :
            base(context, mapper) {
        }

        protected override Projection GetProjection(
            Query query) {
            var projection = base.GetProjection(query);

            projection.Jobs = Context.Jobs.OrderBy(
                j => j.Name).ProjectTo<JobProjectionView>(MapperConfig).Future();

            return projection;
        }
    }

    public sealed class Projection :
        ProjectionBase {
        public QueryFutureEnumerable<JobProjectionView> Jobs { get; set; }
    }

    public sealed class View :
        ViewBase {
        public IList<JobProjectionView> Jobs { get; set; }
    }

    public sealed class JobProjectionView {
        public string CsrName { get; set; }
        public int Id { get; set; }
        public string Name { get; set; }
        public string StateName { get; set; }
        public string StatusName { get; set; }
        public string TypeName { get; set; }
    }
}

The Add and Edit slices are quite similar, but there are subtle differences, such as the Edit query also having to load the existing job, that change the projection requirements. I don't use the Normalize() method in this example, but it exists for the few times when you have to manually intervene when mapping from a projection to a view. In the linked video I go into a bit more detail by taking a starter MVC app and morphing it to use the Projection-View pattern.
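
Just to show the shape of it, a Normalize() override in, say, the Edit slice's QueryHandler could look like the sketch below. It's purely illustrative, since in the handlers above the AutoMapper configuration is assumed to take care of the projection-to-view mapping:

//  Illustrative only: manually resolve anything the projection-to-view
//  mapping can't handle on its own, e.g. materializing the Edit slice's
//  job future value onto the view.
protected override void Normalize(
    Projection projection,
    View view) => view.Job = projection.Job.Value;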

Concluding the Pattern

That pretty much covers the Projection-View pattern. All I've done is slightly expand on Jimmy's Vertical Slices architecture and split the projections and views into separate models. In my project at work, I probably have 200 or more slices using this pattern, so I'm quite confident that it has proven itself. I also have a series planned for the near future in which I'll demonstrate this pattern, as well as other techniques, in a more functional and realistic example.

I hope I've done a good job explaining the Projection-View pattern and how it can be beneficial to you. I would appreciate it if you watched the linked video and, if you think it was helpful, please leave a like on it. Thanks!

Source Code

The source code can be found here.

Website Font Optimization

When I was setting up my blog originally, I tried to optimize the font download. I only use three glyphs from my custom font so it was a waste of data to download the full font, which was 72KB, just for that. The solution was to create a font subset with only the glyphs I need.

At first I used a tool called FontForge to do that, but it was a very slow and manual process. I also messed up twice so I had to start over each time. When I finally finished I got a 4KB file and I left it at that.

Earlier this week I decided to try again, this time with some better tools. The first tool is called IcoMoon, and it lets me export just the glyphs I need, like the original tool did, only much more easily. But there was an issue: it doesn't like TTF fonts. So I used Everything Fonts to convert my TTF into an SVG font, then imported that into IcoMoon and exported the glyphs I needed.

The resultant WOFF file was just 2KB, but I knew I could do better. I used Everything Fonts again to convert it to a WOFF2 file and that further reduced it to 1KB, which is pretty darn good coming from 72KB.

Lastly, I base64 encoded it and embedded it into my CSS. You don't have to do this, but I wanted to avoid an additional request for it. I planned to revisit this when I upgraded the site to HTTP/2, but in the end I decided to leave it in place even with HTTP/2.
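
For what it's worth, generating the embedded payload is a one-liner. Here's a throwaway C# snippet (the file name is just a placeholder) that produces the base64 data for an @font-face src declaration:

//  Read the subsetted font and turn it into a data: URI that can be pasted
//  into the CSS @font-face src declaration.
var base64 = Convert.ToBase64String(
    File.ReadAllBytes("a388-subset.woff2")
);
var dataUri = $"data:font/woff2;base64,{base64}";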

API Client Design Patterns I Learned

Over the past couple of years, as I've been making API clients to integrate with various services for my own needs, I settled on a design pattern inspired by the AWS SDK clients. Every operation is its own self-contained unit that has an object describing the request. Depending on the complexity of the request, there may be some helper methods for common operations. For example, using the Geocod.io client:

var response = await geocodio.GetGeocodeAsync(
    "1600 Pennsylvania Ave NW, Washington, DC 20500"
);

...and:

var response = await geocodio.GetGeocodeAsync(new GeocodeRequest {
    Address = "1600 Pennsylvania Ave NW, Washington, DC 20500"
});

...make the same request. In fact, the first one calls into the second one. This way I can make a quick, simple call or, if I need to, build out a more complex request and pass it through.
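
Under the hood that delegation is trivial; it's something along these lines, where the response type name is an assumption on my part:

//  Sketch: the string overload just wraps the address in a request object
//  and calls the full overload, so both paths share one implementation.
public Task<GeocodeResponse> GetGeocodeAsync(
    string address) => GetGeocodeAsync(new GeocodeRequest {
        Address = address
    });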

In the Kraken.io client, I would probably use custom requests more often, depending on my optimization needs, because there are so many options for configuring how an image should be optimized. Creating a helper method for all of those options is simply impractical and a waste of time.

The same pattern can be applied to the client constructor as well, for example:

public AbcClient(
    string key);

...and:

public AbcClient(
    AbcClientOptions options);

...would create the same instance. I haven't had to use constructor overloads like this yet because the APIs I've integrated with weren't complex enough to warrant it, but the possibility remains. I've yet to find an API where this pattern isn't applicable.
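
If I ever do need it, the simple overload would most likely just chain to the options-based one, something like this sketch (AbcClient and AbcClientOptions being the hypothetical client from above):

//  Sketch: the string-key constructor chains to the options constructor so
//  both end up configuring the client the same way.
public AbcClient(
    string key) :
    this(new AbcClientOptions {
        Key = key
    }) {
}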

Granted, people who spend their time writing API clients, or software in general, probably know this already, and this post will seem obvious to them. But maybe you don't, and I hope it helps you settle on a pattern for the API client you might be developing.

Google Maps API Implementation in C#

Google Maps probably doesn't need an introduction, but just in case, it is a mapping system from Google. It has a bunch of APIs to choose from. Just like with the other API implementations on here, I made a client for Google Maps as well.

At work, I use Google Maps as a fallback when I can't get an accurate result out of Geocod.io. As with the other clients, you need to pass in an instance of an HttpClient as well as your API key to a new instance of GoogleMapsClient.

Since I released this as a NuGet package, I fully implemented the APIs I wanted to integrate with. The client currently implements the Distance Matrix, Elevation, Geocode, and Time Zone APIs. I don't do request validation within the client; instead, I defer to the responses to indicate whether there were any errors. I'm thinking about implementing validation in the future, I just haven't settled on how to do it, and work is currently keeping me quite busy.

Here's how to use the client. For a short example of the client in use, please take a peek at the linked video above.

var googleMaps = new GoogleMapsClient(
    httpClient,
    "{key}"
);

Get Geocode

var geocode = await googleMaps.GetGeocodeAsync(
    "1600 Pennsylvania Ave NW, Washington, DC 20500"
);
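
Because the client defers validation to the response, a typical call checks the returned status before using any results. This is a hedged sketch; the Status property is an assumption based on the underlying Google API's status field, so double-check it against the actual response model:

//  Hedged sketch: Status is assumed to mirror the Google API's status field.
if (geocode.Status != "OK") {
    //  The request failed or was rejected; handle it here rather than
    //  relying on any client-side validation.
}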

Get Reverse Geocoding

var reverseGeocode = await googleMaps.GetReverseGeocodeAsync(
    "38.897675,-77.036547"
);

Get Elevation

var elevation = await googleMaps.GetElevationAsync(
    "38.897675,-77.036547"
);

Get Time Zone

var timeZone = await googleMaps.GetTimeZoneAsync(
    "38.897675,-77.036547"
);

Get Distance Matrix

var distanceMatrix = await googleMaps.GetDistanceMatrixAsync(
    "East Capitol St NE & First St SE, Washington, DC 20004",
    "1600 Pennsylvania Ave NW, Washington, DC 20500"
);

HttpClient Hanging Inexplicably

As I was building the clients for the Carmine.io, Geocod.io, and Kraken.io APIs, I ran into an issue with posting to Kraken. The calls would just hang without an error, aside from the timeout exception that eventually fires, but I don't count that. I spent hours scouring the Internet and Stack Overflow trying to figure out why, to no avail.

Eventually I stumbled onto the answer, and I'll be honest, I'm not sure how I did, but I figured it would be a good thing to share. The problem ended up being the nesting (of sorts) of the using statements for the contents and the message. For example, from the Kraken source:

using (var content = new MultipartFormDataContent())
using (var stringContent = new StringContent(body))
using (var byteContent = new ByteArrayContent(request.FileBlob))
using (var message = await Client.PostAsync(request.Endpoint, content)) {
    content.Add(stringContent, "json");
    content.Add(byteContent, "file", request.FileName);

    return await message.Content.ReadAsStringAsync();
}

The above would trigger the issue, and it came down to the message using statement being chained in with the other ones. Because the using expressions are evaluated in order before the block body runs, the POST was initiated before the content.Add() calls below it ever executed, which caused the request to hiccup and hang. The solution was to separate the message out into its own using statement, like:

using (var content = new MultipartFormDataContent())
using (var stringContent = new StringContent(body))
using (var byteContent = new ByteArrayContent(request.FileBlob)) {
    content.Add(stringContent, "json");
    content.Add(byteContent, "file", request.FileName);

    using (var message = await Client.PostAsync(request.Endpoint, content)) {
        return await message.Content.ReadAsStringAsync();
    }
}

Now everything works as expected. So, be careful with using statement nesting, because it can throw off the expected order of operations and you might spend hours looking for a bug that isn't where you'd think to look, just like I did.

Kraken.io API Implementation in C#

Kraken.io is an image optimization service. I use it at work as part of an automated task to optimize photos taken on job sites. As I'm writing this, we've uploaded 3.24 million photos with a raw file size of 2.08 TB, which were optimized down to 1.05 TB, a saving of about 50%. Those savings go a long way for storage, especially in the cloud.

There is an official Kraken.io client for .NET, but I didn't really feel comfortable with it; it just looked overly complex, so I opted to create my own client. When we first started using Kraken.io, the integration had to run as a scheduled task in Salesforce, so I had to write my own client regardless. Eventually I ported it out of Salesforce and into our system because the Salesforce limitations, and the number of hoops I had to jump through to make it work, were ridiculous. Porting it to .NET and into our system allowed the automated process to do what it needed to without worrying about limits and constraints.

I've since decided to extract it out into a standalone project and make it available as a NuGet package. The NuGet package targets .NET Standard 1.3, and the source code is available on GitHub, here.

To use the client, simply create an instance of KrakenClient and pass in an instance of HttpClient along with your API access and secret keys. For a short example of how it works, take a look at the linked video above.

On a side note, I really wish Kraken would just use a single API key like Carmine.io, Geocod.io, and Google Maps do, to name a few. The access and secret keys don't really do anything other than get passed along with requests; a single key would accomplish the same job. If they served an actual purpose, such as calculating a request signature the way the AWS APIs do, they would make sense, but they don't, so it's an area where the API could be simplified in the future.

Let's get back on topic.

var kraken = new KrakenClient(
    httpClient,
    "{accessKey}",
    "{secretKey}"
);

Optimize

var optimize = await kraken.GetOptimizeAsync(
    "{filePath}"
);

Optimize Wait

var optimizeWait = await kraken.GetOptimizeWaitAsync(
    "{filePath}"
);

Download

var download = await kraken.DownloadAsync(
    "{krakedUrl}"
);