.Net Core on Mac: Connecting to SQL Server

In the previous post I described how I set up the basic development environment using Visual Studio Code. You cannot do much application development without a database though, and while there are many options for database connectivity, since this exercise is about using a Microsoft development stack on a Mac, the database of choice is inevitably SQL Server.

When I started thinking about all this, the only option I had was to install SQL Server on my Mac in a virtual machine. And this is what I did: I installed VirtualBox with Windows 10 LTSB and, in it, I installed SQL Server. I won’t go through this process as it is not Mac related. The point of interest is the network connectivity for the VirtualBox VM: in order to be able to talk to the SQL Server instance inside it, one needs to use bridged networking.


Also, since we are going to connect to SQL Server over the network, TCP connectivity must be enabled.


To test the connectivity you can use command line tools, a Mac client like Navicat Essentials for SQL Server, or connect directly through Visual Studio Code.

There is an extension for this: mssql for Visual Studio Code.

Like all extensions in VSC, it adds a bunch of commands.


The extension works from within the editor: you open a document and change the language mode to SQL.


Then you create a connection profile and connect to the VirtualBox-hosted SQL Server. Upon a successful connection, the footer of VSC changes to this:


And now the party begins.

In the opened document you type SQL commands and execute them with the Execute Query command. The results are fetched into another document and the screen splits in two: SQL on the left, data on the right.


From this point on, you have all the tools in place to dive into some real development.

Except that…

connecting to a VirtualBox-hosted SQL Server is not the least resource-hungry solution.

After I set up the above, Microsoft made a lot of good announcements at the Connect() event. Among them was the release of SQL Server for Mac through Docker, which promises a lighter solution. The Docker container runs Ubuntu Linux, so there is no real SQL Server for Mac, just a better workaround. But I will leave this for a future post.

.Net Core on a Mac: Setting the development environment

Let’s begin from the beginning: I installed the .Net Core SDK, Visual Studio Code (VSC) and the C# extension. The tricky part was the SDK, which uses OpenSSL; I had to install it beforehand with Homebrew.

At this point a basic development environment is in place. But since I didn’t want to develop a CLI application but an Asp .Net MVC one, and since VSC does not provide project scaffolding like its big brother, Visual Studio, I had to install yeoman (another CLI tool) for this task. That, in turn, requires Node.js, so you end up with npm too, and finally run

npm install -g yo generator-aspnet bower

(Yes, it has to have bower too).

And now, everything is ready to start a project. I ran:

yo aspnet

and got

     _-----_     ╭──────────────────────────╮
    |       |    │      Welcome to the      │
    |--(o)--|    │  marvellous ASP.NET Core │
   `---------´   │        generator!        │
    ( _´U`_ )    ╰──────────────────────────╯
    /___A___\   /
     |  ~  |     
 ´   `  |° ´ Y ` 

? What type of application do you want to create? (Use arrow keys)
❯ Empty Web Application 
  Empty Web Application (F#) 
  Console Application 
  Console Application (F#) 
  Web Application 
  Web Application Basic [without Membership and Authorization] 
  Web Application Basic [without Membership and Authorization] (F#)

I chose

 Web Application Basic [without Membership and Authorization]

and was good to go.

Or, was I?

Client side development encompasses tasks like building CSS from sass or less, bundling and minifying. I had to accommodate these too. I decided on scss, so I had to install sass:

gem install sass

(Has anyone been counting the package managers used so far? I will provide a count later).

And, per the .Net Core tutorials and documentation, I had to install gulp for sass compilation (and bundling/minification). Thank God, npm was already in place.

npm install --save-dev gulp
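The compilation itself is then driven by a gulpfile. A minimal gulpfile.js sketch (gulp 3-era API, as was current at the time, assuming the gulp-sass plugin is also installed; the wwwroot paths are illustrative, matching the default ASP.NET Core layout):

```javascript
// gulpfile.js - compile scss to css (sketch; paths are illustrative)
var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass', function () {
  return gulp.src('wwwroot/scss/**/*.scss')   // pick up all scss sources
    .pipe(sass().on('error', sass.logError))  // compile, logging any errors
    .pipe(gulp.dest('wwwroot/css'));          // write the resulting css
});
```

Running `gulp sass` then produces the css files alongside the rest of the static assets.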

At this point I could open my newly created project (ok, the screenshot was taken later).


Database connectivity would have to wait a bit, until we get the basics straight.

The last piece I had to install was the C# extension. Yes, C# is not supported by default! You need to add it as an extension from within VSC.

VSC is mainly addressed to javascript developers, it seems.

So, to get here, I have used the following package managers:

  • brew
  • npm
  • gem
  • bower
  • nuget (internally in VSC)

and two additional cli tools

  • yeoman
  • gulp

Unfortunately, after having done all the above, I found out that gulp will be discontinued in future releases (Bye, bye gulp).

And advancing a little bit with the configuration, I also found that the current project.json is going to be replaced by MSBuild (Bye, bye project.json).

Honestly, this gave me the creeps, not because I have any particular affection for either gulp or project.json, but because it shows a fickleness of ‘heart’ towards the adopted affiliations. If one wants to adopt something new, the last thing he needs is uncertainty.

Having said that, it doesn’t seem to compromise Microsoft’s newfound commitment to openness, as, today, they announced joining the Linux Foundation and they released Visual Studio for Mac (preview).


It’s been quite a few days now that I have been working with the current environment and, apart from some annoyances that I will list below, I am rather happy, mostly because VSC is not just an editor. It has a lot of IDE capabilities, something that I have been missing in other, lighter editors, or found too cumbersome to work with.

And since Intellisense is one of my main reasons for satisfaction, it is its shortfalls that frustrate me the most:

  • Version management in project.json is messy. Intellisense suggestions sometimes are wrong (I got hints for versions 2.0.0 and 3.0.0 when the package is still in 1.x.x), other times they do not show up at all.
  • Enabling Visual Studio Code Taghelpers did not help. Taghelpers Intellisense does not work. I posted a relevant question on Stack Overflow which, at the moment of writing, remains unanswered.
  • After correcting some typos or wrong references in the code, there are artifacts left behind (red squiggly lines underlining a problem that does not exist anymore). They go away with the first compilation though.

But with the current environment I have made a lot of progress in two areas: after creating the basic views and controllers, I spent a lot of time on route configuration and localization. The goal, I remind those who haven’t read my previous post, is to migrate the company website from WordPress to Asp Net MVC.

.Net Core on a Mac

It’s been ages since I blogged anything, let alone anything technical. Since I am in the process of experimenting with ASP .NET Core on my Mac, I thought I’d take the opportunity and log this journey here.

So far I have done three things:

  • Set up the development environment

This isn’t as straightforward as just installing Visual Studio Code. To have scaffolding one needs to rely on CLI tools, and to do some client side development, on the usual suspects: bower, jQuery, bootstrap etc. Which means you need to spend a lot of time with the Terminal.

  • Set up a development database

While one can experiment with SQLite or MySQL, I wanted the real Microsoft thing, SQL Server, and since this isn’t available for Mac, I used VirtualBox with a Windows 10 LTSB guest, where I installed SQL Server Express.

To connect to the database from the host, the VirtualBox VM has to use bridged networking and SQL Server should be accepting TCP connections.

  • Found a relatively simple project that entails the most common workflows.

Our company’s website is multilingual and it is WordPress based (no wonder). While the blog parts serve their purpose nicely, the pages are bloated (HTML-wise) and have a lot of javascript code running (for a reason), which could benefit from a slimming diet.

So, I thought, why not try to migrate the WordPress pages (not the posts) to an MVC site based on Asp .Net Core. To make things more interesting, I want to add some dynamic content too, pulled from our app’s database (why would I bother with SQL Server otherwise?).

And here I am. So far, I have made some progress which I will relate in subsequent posts. This post is only an introduction to the theme. If you have interest in such experiments, stay tuned.

Bookmeta: a needed update

Yesterday, while I was trying to show a friend the calibre plugin I have created to extract and retrieve Greek book metadata, to my surprise, I saw that it was not returning much. Obviously the biblionet page had changed and the script the plugin was relying upon for metadata retrieval needed an update.

Fortunately, it was just a couple of hours’ work; the new version is available and functioning. Enjoy!

Responsive Images: the solutions so far and a mixed new one

I read the other day this fine article by Mat Marquis about his experiences, searches and conclusions on the issue of responsive images. It sparked my interest to look at the subject a bit more thoroughly.

What are responsive images? Or, better, what should they be? For a more elaborate explanation read Mat’s article. For me, it suffices to say this: an image is considered responsive when it adapts both to the size of a viewport and to the bandwidth of a device. Usually, these two go hand in hand: the smaller the device, the higher the probability that it is connected to a slow network (like our mobile phones).

It’s this second requirement (bandwidth) that makes the issue of responsive images complex, because there is no universally accepted methodology or technology to measure the relative abundance or scarcity of this resource.

So, the rule here is simple to understand, yet difficult to implement: the less bandwidth we have at our disposal, the smaller the file size of an image should be.

This rule, though, is meaningless if taken in isolation. A desktop computer with a huge screen and a sluggish network connection does NOT need small-file-size images, as that could adversely impact the quality of a web page rendered to its full extent. We should be talking about smaller file sizes in conjunction with the requirement for smaller image dimensions.

Enough said about theory. What are the proposed solutions to the problem?

I have traced four kinds of solutions:

  • CSS based
  • Script based
  • Server hacks
  • A combination of two or more of the above.

Surprisingly, there is no pure HTML based solution. And this is what Mat pinpoints in the aforementioned article, as well as what he considers the road ahead.

Here is an example that highlights what the HTML should look like, according to Mat:

   <picture>
      <source src="high-res.jpg" media="(min-width: 800px)" />
      <source src="mobile.jpg" />
      <!-- Fallback content: -->
      <img src="mobile.jpg" />
   </picture>

What do we have here? A proposal for HTML5 to treat the image tag much like the video or audio tag, with subordinate source tags whose media attributes allow us to load different images utilizing media queries.
It’s a very elegant solution with two drawbacks:

  • If we ever come to the point where bandwidth is not an issue, the solution will become irrelevant: the current img tag with the resizing it allows, is less verbose.
  • It’s a solution currently out of our control.

Let’s now take a closer look at the existing solutions.

CSS Based Solutions

There is no way to set the source attribute of an image through CSS, so this approach relies on a trick: use a substitute for the img tag that can be styled through CSS. The handy one is the background-image property of block elements. Media queries are used to determine which image to assign to this property. For example:

@media screen and (max-width: 480px) {
  div.someimage {
    background-image: url(small.jpg);
    background-size: auto;
  }
}

@media screen and (min-width: 481px) {
  div.someimage {
    background-image: url(big.jpg);
    background-size: auto;
  }
}

You would probably need some more rules here to make it work in a real case, but this is for demonstration purposes only. When the page loads, the media queries determine which part of the stylesheet is applicable, and this in turn, determines which image to fetch.
The problem with this solution is that it is not semantically correct and that it alters the behavior the user expects from images (i.e. one can’t right-click and download them). Let alone that it poses a burden to the web developer and to the users who will create future content.

Script Solutions

The information about the two or more image files needed could reside inside an image tag with the use of data attributes: <img src="small.jpg" data-bigimage="big.jpg" />

Once the DOM has finished loading, a small script can be put to work to a. determine if there is a need for a bigger image and b. if yes, substitute the value of the src attribute with the value of the data-bigimage attribute.

This solution is not optimal, since a desktop computer will have to load two images (small.jpg first and big.jpg later) while only one is needed.

To save bandwidth and speed things up, the image tag could come without a src value:

<img data-smallimage="small.jpg" data-bigimage="big.jpg" />

With browser detection or media queries we determine which one of the data attributes fits our purpose and set the image’s source attribute from it. Then only the desired image is loaded.

But this solution fails when the browser does not support scripting, or the user disables it, or when, for some reason, the script stops executing before reaching this point.

      if (window.screen.width > 480) {
          this.src = $(this).attr('data-bigimage');
      }
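The selection logic for the deferred-src variant can be isolated into a small, plain function. A sketch (the 480px breakpoint and the data-attribute names mirror the examples above; the jQuery wiring in the comment is illustrative):

```javascript
// Pick the right source for a deferred image based on the viewport width.
// The 480px breakpoint and the data-attribute names follow the examples above.
function chooseSource(viewportWidth, dataset) {
  return viewportWidth > 480 ? dataset.bigimage : dataset.smallimage;
}

// In the browser this would run once the DOM is ready, e.g.:
//   $('img[data-smallimage]').each(function () {
//     this.src = chooseSource(window.screen.width, this.dataset);
//   });

console.log(chooseSource(1024, { smallimage: 'small.jpg', bigimage: 'big.jpg' })); // big.jpg
console.log(chooseSource(320,  { smallimage: 'small.jpg', bigimage: 'big.jpg' })); // small.jpg
```

Keeping the decision in a pure function makes it trivial to swap the width check for a matchMedia query later, without touching the DOM code.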

Server based solutions
If the server has a means of knowing upfront what kind of device it will be servicing, it can determine also what kind of images to send. Device recognition is a shaky issue, mostly because it relies on information passed from the browser to the server, something that can be altered or forged. But, assuming we have it, then the method would work as follows:
The image tag’s src is set to a high-resolution image.
If the server detects a small device, a script kicks in to resize the image, serve it and cache it for future use.
The benefit of this approach is that it requires no changes to the HTML. If device recognition fails, a big image will be sent to the device, which might be slow to load, but the page won’t break. (For more info on this approach look at adaptive images.)

Mixed Solutions

Javascript is the extra ingredient most often needed in conjunction with another approach. So, for instance, in the server based solutions mentioned above, one could determine the device dimensions through a cookie set by javascript in the page’s HEAD:

document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';
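On the server side, that cookie can then drive the image choice. A minimal sketch in JavaScript (Node-style, for consistency with the rest of the examples; the 480px breakpoint, the fallback behavior and the ‘-small’ file-naming scheme are illustrative assumptions, not how the adaptive-images project itself works):

```javascript
// Decide which image variant to serve, given the value of the 'resolution'
// cookie set by the snippet above. Breakpoint and naming are illustrative.
function pickVariant(resolutionCookie, image) {
  var resolution = parseInt(resolutionCookie, 10);
  // Missing or unreadable cookie, or a big screen: serve the full-size image.
  if (isNaN(resolution) || resolution > 480) {
    return image;
  }
  return image.replace(/\.jpg$/, '-small.jpg');
}

console.log(pickVariant('320', 'hero.jpg'));  // hero-small.jpg
console.log(pickVariant('1440', 'hero.jpg')); // hero.jpg
```

Note the fallback direction: when the cookie is absent the full image is served, so, as with adaptive images, a detection failure degrades to a slow page rather than a broken one.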

A new(?) approach

If I were to choose one of the above solutions, I would go for the CSS one. This is both a matter of personal preference and because media queries are a really handy and unobtrusive way to determine the device.

So, to mend the shortcomings of the solution presented above, I would augment it with the help of javascript. I would let the browser determine the device through media queries and load the images as background images of div elements, and then run a script to change these elements to proper images. To determine which containers’ background images should be ‘translated’ to proper images, I would use a distinct class (‘.responsive’ in the example below).

$('div.responsive').each(function () {
    var imgsrc = $(this).css('background-image');
    imgsrc = imgsrc.substr(4, imgsrc.length - 5); // strip the url( and ) wrapper
    $(this).replaceWith('<img src="' + imgsrc + '" alt="" />');
});

The above is not meant to serve as a tutorial of some sort, nor as a comprehensive survey of the solutions proposed. Writing a blog post has always been, for me, a way to put some order in my thoughts and clarify obscure issues through the valuable feedback a post attracts. And this is precisely what this post serves.

The responsive images problem is an open problem. The solution to pick should be the one that best fits your type of application and that diminishes the shortcomings in each case.

I call it “Relief”

IE6 and IE7 in between 10 and 20%? I call it “relief”. And I also call it, “not fast enough”.

Still, IE6 being used more than IE7 is kind of a perversion.

The End of an Era: Internet Explorer Drops Below 50 Percent of Web Usage | Webmonkey | Wired.com.

The Android Déjà vu

Since its inception, Apple has meant to offer a combination of hardware and software to the consumer. Back in the ’70s this wasn’t really a novelty; such was the paradigm of the computer industry in general. One needs only to think of IBM as a testimony to this claim.

And then, in the beginning of the ’80s, came an innovator: Microsoft.

Innovator in the sense that it pioneered the business of being a software company that sells primarily operating systems. Because, otherwise, neither the OS it sold (MS-DOS) was really novel, nor the company itself. As a matter of fact, Microsoft is a bit older than Apple. But Bill Gates and Co were at the right time in the right place to close a deal (with IBM) that would change their fortunes as well as the whole computer industry.

The decoupling of the Operating System from the Hardware and the widespread copying of the IBM personal computer, led to the boom of the PC industry: hundreds of manufacturers produced cheap clones of the original IBM machine, eroding its dominant position and swallowing its market share. This unprecedented expansion was not matched by a relevant expansion of OS offerings though. Microsoft became the king of the game.

The situation remained practically unchanged for 25 years until, in the middle of the ’00s, Apple, aided by the success of its iPod and iTunes, started gaining market share again. The one-stop-shop approach started showing strength again and this trend, as far as personal computers are concerned, is still unfolding.

In 2007 enters the iPhone, a mobile phone with hardware and software from the same source: Apple. As with the original Apple computers, the iPhone made significant inroads in the smartphone market. Soon it became its driving force and certainly the fastest growing, most profitable and most discussed product.

Android, much like MS-DOS compared to Apple, comes later. Much like MS-DOS too, it comes from a vendor (Google) that does not sell hardware. Much like MS-DOS, it helps manufacturers around the globe produce better and cheaper smartphones. And much like MS-DOS (or Windows), it suffers from bugs and instabilities and lags in the user experience it offers compared to the iPhone operating system, iOS.

But it doesn’t matter.

On its way to becoming the main smartphone operating system (if it’s not already there), it’s getting better. And it challenges afresh the wisdom of buying hardware and software from the same source.

If we project these parallels into the future, we should expect to see a marginalization of the iPhone and its later, much later, shiny comeback with a vengeance.

But Steve Jobs is not around this time. And this makes things less predictable.

Web things I came across lately

I have been doing a lot of reading and searching about web development issues lately. Actually it’s a catch-up exercise for things that I should have followed but missed.

So here is a list of a few projects that I was really pleased to discover.

  1. CSSLint For those that would like some advice and an overseeing tutor in their CSS projects, here is a service that does exactly this: it analyzes CSS files and offers ‘advice’ about usual pitfalls and patterns to be avoided. CSSLint is a project of Nicholas C. Zakas and Nicole Sullivan.
  2. 960.gs As you can guess by the name, it’s a grid system for 12 or 16 column grids. I have never used a grid system before; I prefer to hack my way to the design from scratch. But the simplicity of the thing is tempting, at least for prototyping work. 960.gs is a project of Nathan Smith.
  3. Handlebars Littering javascript with a lot of HTML is an anti-pattern. The solution? Use some sort of javascript templating mechanism such as Handlebars. Simple and elegant. A brainchild of Yehuda Katz.
  4. Backbone.js A javascript Model View Controller (MVC) … backbone for web applications. Developed by the guys responsible for DocumentCloud.
Now they have to come together in a future project of mine 🙂