Blog


Never forget type="button" on generated buttons!


I just dealt with one of the weirdest bugs and thought you may find it amusing too.

In one of my slides for my upcoming talk “Even More CSS Secrets”, I had a Mavo app on a <form>, and the app included a collection to quickly create a UI to manage pairs of values for something I wanted to calculate in one of my live demos. A Mavo collection is a repeatable HTML element with affordances to add items, delete items, move items etc. Many of these affordances are implemented via <button> elements generated by Mavo.

Normally, hitting Enter inside a text field within a collection adds a new item, as one would expect. However, I noticed that when I hit Enter inside any item, not only was no item added, but an item was deleted, with the usual “Item deleted [Undo]” UI and everything!

At first I thought it was a bug with the part of Mavo code that adds items on Enter and deletes empty items on backspace, so I commented that out. Nope, still happening. I was already very puzzled, since I couldn’t remember any other part of the codebase that deletes items in response to keyboard events.

So, I added breakpoints on the delete(item) method of Mavo.Collection to inspect the call stack and see how execution got there. Turned out, it got there via a normal …click event on the actual delete button! What fresh hell was this? I never clicked any delete button!

And then it dawned on me: <button> elements with no type attribute are submit buttons by default! Quote from the spec: “The missing value default and invalid value default are the Submit Button state.” This makes no difference in most cases, UNLESS you’re inside a form. The delete button of the first item had been turned into the de facto default submit button, just because it was the first button in that form and it had no type!
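
A minimal reproduction of the situation (a sketch, not Mavo’s actual generated markup):

<form>
	<!-- Generated button with no type attribute ⇒ implicitly type="submit" -->
	<button class="delete-item">Delete</button>
	<input type="text" name="value" />
</form>
<!-- Hitting Enter in the text field “clicks” the Delete button! -->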

I also remembered that no matter how you submit a form (e.g. by hitting Enter in a single-line text field), the browser also fires a click event on the default submit button, because people often listen for that instead of the form’s submit event. Ironically, I was cancelling the form’s submit event in my code, but that fake click event was still generated, making the bug even harder to track down, since no form submission was actually happening.
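
For reference, the cancellation looked roughly like this (a sketch), and it cannot help, because the synthetic click is dispatched before the submit event even fires:

// Sketch: this cancels the submission itself…
form.addEventListener("submit", evt => evt.preventDefault());
// …but the fake click on the default submit button has already been fired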

The solution was of course to go through every part of the Mavo code that generates buttons and add type="button" to them. I would recommend this to everyone writing libraries that will operate in unfamiliar HTML code. Most of the time a type-less <button> will work just fine, but when it doesn’t, things get really weird.


Responsive tables, revisited


Screenshot showing a table with 3 rows turning into 3 sets of key-value pairs

Many people have explored responsive tables. The usual idea is turning the table into key-value pairs, so that cells become rows and there are only 2 columns total, which fit on any screen. However, this means table headers now need to be repeated for every row. The current ways to do that are:

  • Duplicating content in CSS or via a data-* attribute, using generated content to insert it before every row.
  • Using a definition list which naturally has duplicated <dt>s, displaying it as a table in larger screens.

A few techniques that go in an entirely different direction are:

  • Hiding non-essential columns in smaller screens
  • Showing a thumbnail of the table instead, and displaying the full table on click
  • Displaying a graph in smaller screens (e.g. a pie chart)

I think the key-value display is probably best because it works for any kind of table, and provides the same information. So I wondered, is there any way to create it without duplicating content either in the markup or in the CSS? After a bit of thinking, I came up with two ways, each with their own pros and cons.

Both techniques are very similar: They set table elements to display: block; so that they behave like normal elements and duplicate the <thead> contents in two different ways:

  1. Using text-shadow and creating one shadow for each row
  2. Using the element() function to duplicate the entire thead, styles and all.

Each method has its own pros and cons, but the following pros and cons apply to both:

  • Pros: Works with normal table markup
  • Cons:
    • All but the first set of headers are unselectable (since neither shadows nor element()-generated images are real text). However, keep in mind that the techniques based on generated content also have this problem — and for all rows. Also, on the bright side, the markup screen readers see is the same as that of a normal table. Still, it’s a pretty serious flaw and makes this a hack. I’m looking forward to seeing more viable solutions.
    • Only works if none of the table cells wrap, since it depends on table cells being aligned with their headers.

Using text-shadow to copy text to other rows

  • Additional Pros: Works in every browser
  • Additional Cons: The maximum number of rows needs to be hardcoded in the CSS, since each row needs another text shadow on <thead>. However, you can specify more shadows than needed, since overflow: hidden on the table prevents the extra ones from showing up. Also, the number of columns needs to be specified in the CSS (the --cols variable).
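
A rough sketch of the idea (illustration only, not the demo’s exact code):

/* Copy the header text down once per row via text-shadow */
table {
	--cols: 4;        /* number of columns, hardcoded */
	display: block;
	overflow: hidden; /* extra shadows beyond the last row stay hidden */
}

thead {
	display: block;
	/* Each shadow is one row-height further down; it’s fine
	   to specify more shadows than there are rows */
	text-shadow: 0 2.5em 0, 0 5em 0, 0 7.5em 0;
}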

Demo

Using element() to copy the entire <thead> to other rows

  • Additional Cons: element() is currently only supported in Firefox :(
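
The gist of it, as a rough sketch (assuming the <thead> has an id of head; Firefox implements the function as -moz-element()):

/* Paint a live copy of the entire <thead> above each row */
tbody tr::before {
	content: "";
	display: block;
	height: 2.5em; /* ≈ header row height */
	background: -moz-element(#head);
	background-size: 100% 100%;
}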

Demo


Quicker Storify export


If you’ve used Storify, you probably know by now it’s closing down soon. They have an FAQ up to help people with the transition which explains that to export your content you need to…

  1. Log in to Storify at www.storify.com.
  2. Mouse over the story that contains content you would like to export and select “View.”
  3. Click on the ellipses icon and select “Export.”
  4. Choose your preferred format for download.
  5. To save your content and linked assets in HTML, select File > Save as > Web Page, Complete. To export your content to PDF, select Export to HTML > File > Print > Save as PDF.
  6. Repeat the process for each story whose content you would like to preserve.

So I started doing that. I wasn’t sure if JSON or HTML would be more useful to me, so I was exporting both. It was painful. Each export required 3 page loads, and they were slow. After 5 stories, I started wondering if there was a quicker way. I’m a programmer after all; my job is to automate things. However, I also didn’t want to spend too long on it, since I only had 40 stories, so the effort should definitely not take longer than manually exporting the remaining 35 stories would have.

I noticed that the HTML and JSON URLs for each story could actually be recreated by using the slug of the Story URL:

https://storify.com/LeaVerou/css-variables-var-subtitle-cssconf-asia.html
https://api.storify.com/v1/stories/LeaVerou/css-variables-var-subtitle-cssconf-asia

The slug (css-variables-var-subtitle-cssconf-asia) is the only thing that changes. I tried that with a different slug and it worked just fine. Bingo! So I could write a quick console script to get all these URLs and open them in separate tabs, and then all I have to do is go through each tab and hit Cmd + S to save. It’s not perfect, but it took minutes to write and saved A LOT of time.

Following is the script I wrote. Go to your profile page, click “Show more” and scroll until all your stories are visible, then paste it into the console. You will probably need to do it twice: once to disable popup blocking because the browser rightfully freaks out when you try to open this many tabs from script, and once to actually open all of them.

// Collect all unique story slugs on the page
var slugs = [...new Set($$(".story-tile").map(e => e.dataset.path))];
// Open both export URLs (JSON and HTML) for every story
slugs.forEach(s => { open(`https://api.storify.com/v1/stories/${s}`); open(`https://storify.com/${s}.html`); });

This gets a list of all unique (hence the [...new Set(array)]) slugs and opens both the JSON and HTML export URLs in new tabs. Then you can go through each tab and save.

You will notice that the browser becomes REALLY SLOW when you open this many tabs (in my case 41 stories × 2 tabs each = 82 tabs!), so you may want to do it in steps, using array.slice(). Also, if you don’t want to save the HTML version, the whole process becomes much faster; the HTML pages took AGES to load and kept freezing the browser.
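
For example, to only do the first ten stories on one run:

// Same script, just batched; repeat with slice(10, 20) and so on
slugs.slice(0, 10).forEach(s => { open(`https://api.storify.com/v1/stories/${s}`); open(`https://storify.com/${s}.html`); });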

Hope this helps!

PS: If you’re content with your data being held hostage by a different company, you could also use this tool by Wakelet. I’ve done that too, but I wanted to own my data as well.


Free Intro to Web Development slides (with demos)


This semester I’m teaching 6.813 User Interface Design and Implementation at MIT, as an instructor.

Many of the course’s assignments involve Web development, and the course has traditionally included two 2-hour labs to introduce students to these technologies. Since I’m involved this year, I decided to make new labs from scratch and increase the number of labs from 2 to 3. Even so, deciding what to include (and what not to) from the entirety of web development in only 6 hours was really hard, and I still feel I failed to include important bits.

Since many people asked me for the slides on Twitter, I decided to share them. You will find the slides here, and an outline of what is covered here. These slides were also the supporting material the students had on their own laptops, and they often had to do exercises in them.

The audience for these slides is beginners in Web development but technical otherwise — people who understand OOP, trees, data structures and have experience in at least one C-like programming language.

Some demos will not make sense as they were live coded, but I included notes (top right or bottom left corner) about what was explained in each part.

Use the arrow keys to navigate. It is also quite big, so do not open this on a phone or on a data plan.

If the “Open in new Tab” button opens a tab which then closes immediately, disable Adblock.

From some quick testing, they seem to work in Firefox and Safari, but in class we were using an up-to-date version of Chrome (since we were talking about developer tools, we needed to all have the same UI), so that’s the browser I’d recommend, since the slides were tested much more there.

I’m sharing them as-is in case someone else finds them useful. Please do not bug me if they don’t work in your setup, or if you do not find them useful or whatever. If they don’t tickle your fancy, move on. I cannot provide any support or fixes. If you want to help fix an issue, you can submit a pull request, but be warned: most of the code was written under extreme time pressure (I had to produce this 6 times as fast as I usually need to make talks), so it’s not my finest moment.

If you want to use them to teach other people that’s fine as long as it’s a non-profit event.



Different remote and local resource URLs, with Service Workers!


I often run into this issue where I want a different URL remotely and a different one locally, so I can test my local changes to a library. Sure, relative URLs work a lot of the time, but they are often not an option. Developing Mavo is yet another example of this: since Mavo is in a separate repo from mavo.io (its website) as well as test.mavo.io (the testsuite), I can’t just use relative URLs to it that also work remotely. I’ve been encountering this problem way too frequently, pretty much since I started in web development. In this post, I will describe all the solutions and workarounds I’ve used for this over time, including the one I’m currently using for Mavo: Service Workers!

The manual one

Probably the first solution everyone tries is doing it manually: every time you need to test, you just change the URL to a relative, local one, and try to remember to change it back before committing. I still use this in some cases, since we developers are a lazy bunch. Usually I keep both around and use my editor’s (un)commenting shortcut for enabling one or the other:

<script src="https://get.mavo.io/mavo.js"></script>
<!--<script src="../mavo/dist/mavo.js"></script>-->

However, as you might imagine, this approach has several problems, the worst of which is that more than once I forgot and committed with the active script being the local one, which resulted in the remote website totally breaking. Also, it’s clunky, especially when it’s two resources whose URLs you need to change.

The JS one

This idea uses a bit of JS to load the remote URL when the local one fails to load.

<script src="http://localhost:8000/mavo/dist/mavo.js" onerror="this.src='https://get.mavo.io/mavo.js'"></script>

This works, and doesn’t introduce any cognitive overhead for the developer, but the obvious drawback is that it slows the page down, since a request needs to be sent and fail before the real resource can be loaded. Slowing things down locally might be acceptable, even though undesirable, but slowing down the remote website for the sake of debugging is completely unacceptable. Furthermore, this exposes the debugging URLs in the HTML source, which gives me a bit of a knee-jerk reaction.

A variation of this approach that doesn’t have the performance problem is:

<script>
{
 let host = location.hostname == "localhost"? 'http://localhost:8000/dist' : 'https://get.mavo.io';
 document.write(`<script src="${host}/mavo.js"></scr` + `ipt>`);
}
</script>

This works fine, but it’s very clunky, especially if you have to do this multiple times (e.g. on multiple testing files or demos).

The build tools one

The solution I was following up to a few months ago was to use gulp to copy over the files needed, and then link to my local copies via a relative URL. I would also have a gulp.watch() that monitors changes to the original files and copies them over again:

gulp.task("copy", function() {
	gulp.src(["../mavo/dist/**/*"])
		.pipe(gulp.dest("mavo"));
});

gulp.task("watch", function() { gulp.watch(["…/mavo/dist/*"], ["copy"]); });

This worked but I had to remember to run gulp watch every time I started working on each project. Often I forgot, which was a huge source of confusion as to why my changes had no effect. Also, it meant I had copies of Mavo lying around on every repo that uses it and had to manually update them by running gulp, which was suboptimal.

The Service Worker one

In April, after being fed up with having to deal with this problem for over a decade, I posted a tweet:

@MylesBorins replied (though his tweet seems to have disappeared) and suggested that perhaps Service Workers could help. In case you’ve been hiding under a rock for the past couple of years, Service Workers are a new(ish) API that allows you to intercept requests from your website to the network and do whatever you want with them. They are mostly promoted for creating good offline experiences, though they can do a lot more.

I was looking for an excuse to dabble in Service Workers for a while, and this was a great one. Furthermore, browser support doesn’t really matter in this case because the Service Worker is only used locally.

The code I ended up with looks like this in a small script called sitewide.js, which, as you may imagine, is used sitewide:

(function() {

if (location.hostname !== "localhost") {
	return;
}

if (!self.document) {
	// We’re in a service worker! Oh man, we’re living in the future!
	self.addEventListener("fetch", function(evt) {
		var url = evt.request.url;

		if (url.indexOf("get.mavo.io/mavo.") > -1 || url.indexOf("dev.mavo.io/dist/mavo.") > -1) {
			var newURL = url.replace(/.+?(get|dev)\.mavo\.io\/(dist\/)?/, "http://localhost:8000/dist/") + "?" + Date.now();

			var response = fetch(new Request(newURL), evt.request)
				.then(r => r.status < 400? r : Promise.reject())
				// If that fails, return the original request
				.catch(err => fetch(evt.request));

			evt.respondWith(response);
		}
	});

	return;
}

if ("serviceWorker" in navigator) {
	// Register this same script as a service worker
	addEventListener("load", function() {
		navigator.serviceWorker.register("sitewide.js");
	});
}

})();

So far, this has worked more nicely than any of the aforementioned solutions and allows me to just use the normal remote URLs in my HTML. However, it’s not without its own caveats:

  • Service Workers only take effect from the second pageload onwards, so the very first load uses the remote URL. This is almost never a problem locally anyway, so I’m not much concerned about it.
  • The same-origin restriction that service workers have is fairly annoying: I have to copy the service worker script into every repo I want to use this on, I cannot just link to it.
  • It needs to be explained to new contributors, since most aren’t familiar with Service Workers and how they work.

Currently the URLs for both local and remote are baked into the code, but it’s easy to imagine a mini-library that takes care of it as long as you include the local URL as a parameter (e.g. https://get.mavo.io/mavo.js?local=http://localhost:8000/dist/mavo.js).
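
The fetch handler of such a mini-library could look something like this (a hypothetical sketch, nothing that actually exists):

// Hypothetical: redirect any request that carries a ?local= parameter
self.addEventListener("fetch", function(evt) {
	var local = new URL(evt.request.url).searchParams.get("local");

	if (local) {
		// Try the local copy first; fall back to the original request
		evt.respondWith(fetch(local).catch(() => fetch(evt.request)));
	}
});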

Other solutions

Solutions I didn’t test (but you may want to) include:

  • .htaccess redirect based on domain, suggested by @codepo8. I don’t use Apache locally, so that’s of no use to me.
  • Symbolic links, suggested by @aleschmidx
  • User scripts (e.g. Greasemonkey), suggested by @WebManWlkg
  • Modifying the hosts file, suggested by @LukeBrowell (that works if you don’t need access to the remote URL at all)

Is there any other solution? What do you do?


Introducing Mavo: Create web apps entirely by writing HTML!


Today I finally released the project I’ve been working on for the last two years at MIT CSAIL: An HTML-based language for creating (many kinds of) web applications without programming or a server backend. It’s named Mavo after my late mother (Maria Verou), and is Open Source of course (yes, getting paid to work on open source is exactly as fun as it sounds).

It was the scariest release of my life, and I had been postponing it for months. I kept feeling Mavo was not quite there yet: maybe I should add this one feature first, oh, and this other one, oh, and we can’t release without this one, surely! Eventually I realized that what I was doing had more to do with postponing the anxiety and less to do with Mavo reaching a stage where it could be released. After all, “if you’re not at least a bit embarrassed by what you release, you waited too long”, right?

The infamous Ship It Squirrel

So, there it is, I hope you find it useful. Read the post on Smashing Magazine or just head straight to mavo.io, read the docs, and play with the demos!

And do let me know what you make with it, no matter how small and trivial you may think it is, I would love to see it!


HTML APIs: What they are and how to design a good one


I’m a strong believer in lowering the barrier of what it takes to create rich, interactive experiences and improving the user experience of programming. I wrote an article over at Smashing Magazine aimed at JavaScript library developers that want their libraries to be usable via HTML (i.e. without writing any JavaScript). Sounds interesting? Read it here.


Duoload: Simplest website load comparison tool, ever


Today I needed a quick tool to compare the loading progression (not just loading time, but also incremental rendering) of two websites, one remote and one in my localhost. Just have them side by side and see how they load relative to each other. Maybe even record the result on video and study it afterwards. That’s all. No special features, no analysis, no stats.

So I did what I always do when I need help finding a tool, I asked Twitter:

Most suggested complicated tools, some non-free and most unlikely to work on local URLs. I thought damn, what I need is a very simple thing! I could code this in 5 minutes! So I did and here it is, in case someone else finds it useful! The (minuscule amount of) code is of course on Github.

Duoload

Of course it goes without saying that this is probably a bit inaccurate. Do not use it for mission-critical performance comparisons.

Credits for the name Duoload go to Chris Lilley, who came up with it within the 1-minute deadline I gave him :P


Resolve Promises externally with this one weird trick


Those of us who use promises heavily have often wished there was a Promise.prototype.resolve() method that would force an existing Promise to resolve. However, for architectural reasons (throw safety), there is no such thing and probably never will be. Therefore, a Promise can only resolve or reject by calling the respective functions provided to its constructor:

var promise = new Promise((resolve, reject) => {
	if (something) {
		resolve();
	}
	else {
		reject();
	}
});

However, often it is not desirable to put your entire code inside a Promise constructor just so that you can resolve or reject it at any point. In my latest case today, I wanted a Promise that resolved when a tree was created, so that third-party components could defer code execution until the tree was ready. However, given that plugins could be running on any hook, that meant wrapping a ton of code with the Promise constructor, which was obviously a no-go. I had come across this problem before and usually gave up and created a Promise around all the necessary code. However, this time my aversion to what this would produce got me to think even harder. What could I do to call resolve() asynchronously from outside the Promise?

A custom event? Nah, too slow for my purposes, why involve the DOM when it’s not needed?

Another Promise? Nah, that just transfers the problem.

A setInterval to repeatedly check if the tree is created? OMG, I can’t believe you just thought that, Lea, ewwww, gross!

Getters and setters? Hmmm, maybe that could work! If the setter is inside the Promise constructor, then I can resolve the Promise by just setting a property!

My first iteration looked like this:

this.treeBuilt = new Promise((resolve, reject) => {
	Object.defineProperty(this, "_treeBuilt", {
		set: value => {
			if (value) {
				resolve();
			}
		}
	});
});

// Many, many lines below…

this._treeBuilt = true;

However, it really bothered me that I had to define 2 properties when I only needed one. I could of course do some cleanup and delete them after the promise is resolved, but the fact that at some point in time these useless properties existed will still haunt me, and I’m sure the more OCD-prone of you know exactly what I mean. Can I do it with just one property? Turns out I can!

The main idea is realizing that the getter and the setter could be doing completely unrelated tasks. In this case, setting the property would resolve the promise and reading its value would return the promise:

var setter;
var promise = new Promise((resolve, reject) => {
	setter = value => {
		if (value) {
			resolve();
		}
	};
});

Object.defineProperty(this, "treeBuilt", { set: setter, get: () => promise });

// Many, many lines below…

this.treeBuilt = true;

For better performance, once the promise is resolved you could even delete the dynamic property and replace it with a normal property that just points to the promise, but be careful because in that case, any future attempts to resolve the promise by setting the property will make you lose your reference to it!
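
That cleanup could look something like this (a sketch; note that the property needs to be defined with configurable: true, otherwise it cannot be deleted):

Object.defineProperty(this, "treeBuilt", {
	set: setter,
	get: () => promise,
	configurable: true // required for the delete below to work
});

promise.then(() => {
	delete this.treeBuilt;    // remove the accessor pair…
	this.treeBuilt = promise; // …and replace it with a plain property
});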

I still think the code looks a bit ugly, so if you can think of a more elegant solution, I’m all ears (well, eyes really)!

Update: Joseph Silber gave an interesting solution on twitter:

function defer() {
	var deferred = {
		promise: null,
		resolve: null,
		reject: null
	};

	deferred.promise = new Promise((resolve, reject) => {
		deferred.resolve = resolve;
		deferred.reject = reject;
	});

	return deferred;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

I love that this is reusable, and calling resolve() makes a lot more sense than setting something to true. However, I didn’t like that it involved a separate object (deferred) and that people using the treeBuilt property would not be able to call .then() directly on it, so I simplified it a bit to only use one Promise object:

function defer() {
	var res, rej;

	var promise = new Promise((resolve, reject) => {
		res = resolve;
		rej = reject;
	});

	promise.resolve = res;
	promise.reject = rej;

	return promise;
}

this.treeBuilt = defer();

// Many, many lines below…

this.treeBuilt.resolve();

Finally, something I like!


URL rewriting with Github Pages


I adore Github Pages. I use them for everything I can, and try to avoid server-side code like the plague, exactly so that I can use them. The convenience of pushing to a repo and having the changes immediately reflected on the website, with no commit hooks or any additional setup, is awesome. The free price tag is even more awesome. So, when the time came to publish my book, naturally, I wanted the companion website to be on Github Pages.

There was only one small problem: I wanted nice URLs, like http://play.csssecrets.io/pie-animated, which would redirect to demos on dabblet.com. Any sane person would have likely bitten the bullet and used some kind of server-side language. However, I’m not a particularly sane person :D

Turns out Github uses some URL rewriting of its own on Github Pages: If you provide a 404.html, any URL that doesn’t exist will be handled by that. Wait a second, is that basically how we do nice URLs on the server anyway? We can do the same in Github Pages, by just running JS inside 404.html!

So, I created a JSON file with all demo ids and their dabblet URLs, a 404.html that shows either a redirection or an error (JS decides which one) and a tiny bit of Vanilla JS that reads the current URL, fetches the JSON file, and redirects to the right dabblet. Here it is, without the helpers:

(function(){

document.body.className = 'redirecting';

var slug = location.pathname.slice(1);

xhr({
	src: 'secrets.json',
	onsuccess: function () {
		var slugs = JSON.parse(this.responseText);

		var hash = slugs[slug];

		if (hash) {
			// Redirect
			var url = hash.indexOf('http') == 0? hash : 'http://dabblet.com/gist/' + hash;
			$('section.redirecting > p').innerHTML = 'Redirecting to <a href="' + url + '">' + url + '</a>…';
			location.href = url;
		}
		else {
			document.body.className = 'error not-found';
		}
	},
	onerror: function () {
		document.body.className = 'error json';
	}
});

})();

That’s all! You can imagine using the same trick to redirect to other HTML pages in the same Github Pages site, have proper URLs for a single page site, and all sorts of things! Is it a hack? Of course. But when did that ever stop us? :D


Autoprefixing, with CSS variables!


Recently, when I was making the minisite for markapp.io, I realized a neat trick one can do with CSS variables, precisely due to their dynamic nature. Let’s say you want to use a property that has multiple versions: an unprefixed one and one or more prefixed ones. In this example we are going to use clip-path, which currently needs both an unprefixed version and a -webkit- prefixed one, however the technique works for any property and any number of prefixes or different property names, as long as the value is the same across all variations of the property name.

The first part is to define a --clip-path property on every element with a value of initial. This prevents the property from being inherited every time it’s used, and since the * has zero specificity, any declaration that uses --clip-path can override it. Then you define all variations of the property name with var(--clip-path) as their value:

* {
	--clip-path: initial;
	-webkit-clip-path: var(--clip-path);
	clip-path: var(--clip-path);
}

Then, every time we need clip-path, we use --clip-path instead and it just works:

header {
	--clip-path: polygon(0% 0%, 100% 0%, 100% calc(100% - 2.5em), 0% 100%);
}

Even !important should work, because it affects the cascading of CSS variables. Furthermore, if for some reason you want to explicitly set -webkit-clip-path, you can do that too, again because * has zero specificity. The main downside to this is that it limits browser support to the intersection of the support for the feature you are using and support for CSS Variables. However, all browsers except Edge support CSS variables, and Edge is working on it. I can’t see any other downsides to it (except having to use a different property name obvs), but if you do, let me know in the comments!
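
For instance, both of these behave as expected (simple examples assuming the setup above):

.fancy {
	/* !important participates in the cascade of the custom property,
	   so this wins over other --clip-path declarations */
	--clip-path: circle(50%) !important;
}

.legacy {
	/* An explicit prefixed declaration still overrides the * rule,
	   since * has zero specificity */
	-webkit-clip-path: inset(0);
}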

I think there’s still a lot to be discovered about cool uses of CSS variables. I wonder if there exists a variation of this technique to produce custom longhands, e.g. breaking box-shadow into --box-shadow-x, --box-shadow-y etc, but I can’t think of anything yet. Can you? ;)


Markapp: A list of HTML libraries


I have often lamented how many JavaScript developers don’t realize that a large percentage of HTML & CSS authors are not comfortable writing JS, and struggle to use their libraries.

To encourage libraries with HTML APIs, i.e. libraries that can be used without writing a line of JS, I made a website to list and promote them: markapp.io. The list is currently quite short, so I’m counting on you to expand it. Seen any libraries with good HTML APIs? Add them!


Introducing Multirange: A tiny polyfill for HTML5.1 two-handle sliders

2 min read 0 comments

As part of my preparation for my talk at CSSDay HTML Special, I was perusing the most recent HTML specs (WHATWG Living Standard, W3C HTML 5.1) to see what undiscovered gems lay there. It turns out that HTML sliders have a lot of cool features specced that aren’t very well implemented:

  • Ticks that snap, via the list attribute and the <datalist> element. This is fairly decently implemented, except labelled ticks, which are not supported anywhere.
  • Vertical sliders when height > width, implemented nowhere (instead, browsers employ proprietary ways for making sliders vertical: An orient=vertical attribute in Gecko, -webkit-appearance: slider-vertical; in WebKit/Blink and writing-mode: bt-lr; in IE/Edge). Good ol’ rotate transforms work too, but have the usual problems, such as layout not being affected by the transform.
  • Two-handle sliders for ranges, via the multiple attribute.
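
In markup, those three features boil down to something like this (a sketch of such a testcase):

<!-- Ticks (including a labelled one) via the list attribute -->
<input type="range" list="ticks" />
<datalist id="ticks">
	<option value="0">
	<option value="50" label="half">
	<option value="100">
</datalist>

<!-- Vertical: per spec, just height > width -->
<input type="range" style="width: 1.5em; height: 10em" />

<!-- Two handles, via the multiple attribute -->
<input type="range" multiple />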

I made a quick testcase for all three, and to my disappointment (but not to my surprise), support was extremely poor. I was most excited about the last one, since I’ve been wanting range sliders in HTML for a long time. Sadly, there are no implementations. But hey, what if I could create a polyfill by cleverly overlaying two sliders? Would it be possible? I started experimenting in JSBin last night, just for the lolz, then soon realized this could actually work and started a GitHub repo. Since CSS variables are now supported almost everywhere, I’ve had a lot of fun using them. Sure, I could get broader support without them, but the code is much simpler, more elegant and customizable now. I also originally started with a Bliss dependency, but realized it wasn’t worth it for such a tiny script.
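
The core of the overlaying trick is roughly this (a sketch of the general approach, not Multirange’s exact code):

/* Stack two regular sliders on top of each other; the tracks
   ignore pointer events, but the thumbs stay interactive */
.multirange-wrapper { position: relative; }

.multirange-wrapper input[type="range"] {
	position: absolute;
	width: 100%;
	pointer-events: none;
}

.multirange-wrapper input[type="range"]::-webkit-slider-thumb {
	pointer-events: auto;
}

.multirange-wrapper input[type="range"]::-moz-range-thumb {
	pointer-events: auto;
}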

So, enjoy, and contribute!

Multirange


My positive experience as a woman in tech


Women speaking up about the sexism they have experienced in tech is great for raising awareness about the issues. However, when no positive stories get out, the overall picture painted is bleak, which could scare even more women away.

Lucky for me, I fell in love with programming a decade before I even heard there is a sexism problem in tech. Had I read about it before, I might have decided to go for some other profession. Who wants to be fighting an uphill battle all her life?

Thankfully, my experience has been quite different. Being in this industry has brought me nothing but happiness. Yes, there are several women who have had terrible experiences, and I’m in no way discounting them. They may even be the majority, though I am not aware of any statistics. However, there is also the other side. Those of us who have had incredibly positive experiences, and have always been treated with nothing but respect. That side’s stories need to be heard too, not silenced out of fear that we will become complacent and stop trying for more equality. Stories like mine should become the norm, not the exception.

I’ve had a number of different roles in tech over the course of my life. I’ve been a student, a speaker & author, I’ve worked at W3C, I’ve started & maintain several successful open source projects and I’m currently dabbling in Computer Science research. In none of these roles did I ever feel I was unfairly treated due to my gender. That is not because I’m oblivious to sexism. I tend to be very sensitive to seeing it, and I often notice even the smallest acts of sexism (“death by a thousand paper cuts”). I see a lot of sexism in society overall. However, inside this industry, my gender never seemed to matter much, except perhaps in positive ways.

On my open source repos, I have several contributors, the overwhelming majority of which, is male. I’ve never felt less respected due to my gender. I’ve never felt that my work was taken less seriously than male OSS developers. I’ve never felt my contributors would not listen to me. I’ve never felt my work was unfairly scrutinized. Even when I didn’t know something, or introduced a horrible bug, I’ve never been insulted or berated. The community has been nothing but friendly, helpful and respectful. If anything, I’ve sometimes wondered if my gender is the reason I hardly ever get any shit!

On stage, I’ve never gotten any negative reactions. My talks always get excellent reviews, which have nothing to do with me being female. There is sometimes the odd complimentary tweet about my looks, but that’s not only exceedingly rare, but also always combined with a compliment about the actual talk content. My gender only affected my internal motivation: I often felt I had to be good, otherwise I would be painting all female tech speakers in a negative light. But other people are not at fault for my own stereotype threat.

My book, CSS Secrets, has been as successful as an advanced CSS book could possibly aspire to be, and got to an average of 5 stars on Amazon only a few months after its release. It’s steadily the 5th bestseller on CSS and was No 1 for a while shortly after publication. My gender did not seem to negatively affect any of that, even though there’s a picture of me in the French flap, so there are no doubts about me being female (as if the name Lea wasn’t enough of a hint).

As a student, I’ve never felt unfairly treated due to my gender by any of my professors, even the ones in Greece, a country that is not particularly famous for its gender equal society, to put it mildly.

As a new researcher, I have no experience with publishing papers yet, so I cannot share any experiences on that. However, I’ve been treated with nothing but respect by both my advisor and colleagues. My opinion is always heard and valued and even when people don’t agree, I can debate it as long and as intensely as I want, without being seen as aggressive or “bossy”.

I’ve worked at W3C and still participate as an Invited Expert in the CSS Working Group. In neither of these roles did my gender seem to matter in any way. I’ve always felt that my expertise and skillset were valued and my opinions heard. In fact, the most well-respected member of the CSS WG, is the only other woman in it: fantasai.

Lastly, In all my years as a working professional, I’ve always negotiated any kind of remuneration, often hard. I’ve never lost an opportunity because of it, or been treated with negativity afterwards.

On the flip side, sexism today is rarely overt. Given that hardly anybody over ten will flat out admit they think women are inferior (even to themselves), it’s often hard to tell when a certain behavior stems from sexist beliefs. If someone is a douchebag to you, are they doing it because you’re a woman, or because they’re douchebags? If someone is criticizing your work, are they doing it because they genuinely found something to criticize or because they’re negatively predisposed due to your gender? It’s impossible to know, especially since they don’t know either! If you confront them on their sexism, they will deny all of it, and truly believe it. It takes a lot of introspection to see one’s internalized stereotypes. Therefore, a lot of the time, you cannot be sure if you have experienced sexist behavior, and there is no way to find out for sure, since the perpetrator doesn’t know either. There are many false positives and false negatives there.

Perhaps I don’t feel I have experienced much sexism because I prefer to err on the side of false negatives. Paraphrasing Blackstone, I would rather fail to call out sexist behavior ten times than wrongly accuse someone of it once. It might also have to do with my personality: I’m generally confident and can be very assertive. When somebody is being a jerk to me, I will not curl up in a ball and question my life choices, I will reply to them in the same tone. However, those two alone cannot make the difference between a pit rampant with sexism and an egalitarian paradise. I think a lot of it is that we have genuinely made progress, and we should celebrate it with more women coming out with their positive experiences (it cannot just be me, right?).

Ironically, one of the very few times I have experienced any sexism in the industry was when a dude was trying to be nice to me. I was in a speaker room at a conference in Las Vegas, frantically working on my slides, not participating in any of the conversations around me. At some point, one of the guys said “fuck” in a conversation, then turned and apologized to me. Irritated about the sudden interruption, I lifted my head and looked around. I noticed for the first time that day that I was the only woman in the room. His effort to be courteous made me feel that I was different, the odd one out, the one we must be careful around and treat like a fragile flower. To this day, I regret being too startled to reply “Eh, I don’t give a fuck”.


Introducing Bliss: A 3KB library for happier Vanilla JS


Anyone who follows this blog, my twitter, or my work probably is aware that I’m not a huge fan of big libraries. I think wrapper objects are messy, and big libraries are overkill for smaller projects. On large projects, one uses frameworks like React or Angular anyway, not libraries.

Anyone who writes Vanilla JS on a daily basis probably is aware that it can sometimes be, ahem, somewhat unpleasant to work with. Sure, the situation is orders of magnitude better than it was when I started. Back then, IE6 was the dominant browser and you needed a helper function to even add event listeners to an element (remember element.attachEvent?) or to get elements by a class!

Screenshot of jAsset’s datepicker UI component

Fun fact: I learned JavaScript back then by writing my own library, called jAsset. I had not heard of jQuery when I started it in 2007, so I had even coded my own selector engine! (Anyone remember slickspeed?) jAsset had plenty of nice helper functions, its own UI library and a cool logo. I had even started to make a website for its UI components, seen on the right.

Sadly, jAsset died the sad inevitable death of all unreleased projects: Without external feedback, I had nobody to hold me back from adding to its API every time I personally needed a helper function. And adding, and adding, and adding… Until it became 5000+ loc long and its benefit of being lightweight or comprehensible had completely vanished. It collapsed under its own weight before it even saw the light of day. I abandoned it and went through a few years of using jQuery as my preferred helper library. Eventually, my distaste for wrapper objects, the constantly improving browser support for new APIs that made Vanilla JS more palatable, and the decline of overly conspicuous browser bugs led me to give it up.

It was refreshing, and educational, but soon I came to realize that while Vanilla JS is orders of magnitude better than it was when I started, certain APIs are still quite unwieldy, which can be annoying if you use them often. For example, the Vanilla JS for creating an element, with other elements inside it, events and inline styles is so commonly needed, but also so verbose and WET, it can make one suicidal.
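
For example, creating a simple element tree with plain DOM code (nothing library-specific, just the verbosity in question):

// <a class="item" href="#somewhere" style="color: red">Hi <strong>there</strong></a>
// with a click listener, the long way:
var a = document.createElement("a");
a.className = "item";
a.href = "#somewhere";
a.style.color = "red";
a.addEventListener("click", function(evt) { console.log("Clicked!"); });
a.appendChild(document.createTextNode("Hi "));
var strong = document.createElement("strong");
strong.textContent = "there";
a.appendChild(strong);
document.body.appendChild(a);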

However, Vanilla JS does not mean “use no abstractions”. Programming is all about abstractions! The Vanilla JS movement, is about favoring speed, smaller abstractions and understanding of the Web Platform, over big libraries that we treat as a black box. It’s about using libraries to save time, not to skip learning.

So, I used my own tiny helpers, on every project. They were small and easy to understand, instead of several KB of code aiming to fix browser bugs I will likely never encounter and let me create complex nested DOM structures with a single JSON-like object. Over time, their API solidified and improved. On larger projects it was a separate file which I had tentatively codenamed Utopia (due to the lack of browser bug fixes and optimistic use of modern APIs). On smaller ones just a few helper methods (I could not live without at least my tiny 2 sloc $() and $$() helpers!). Here is a sample from my open source repos:
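
A sketch of their typical shape (not the verbatim code from any one repo):

// The tiny query helpers in question
function $(expr, con) {
	return (con || document).querySelector(expr);
}

function $$(expr, con) {
	return Array.prototype.slice.call((con || document).querySelectorAll(expr));
}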

Notice any recurring themes there? :)

I never mentioned Utopia.js anywhere, besides silently including it in my projects, so it went largely unnoticed. Sometimes people would look at it, ask me to release it, I’d promise them I would and then nothing. A few years ago, someone noticed it, liked it and documented it a bit (site is down now it seems). However, it was largely my little secret, hidden in public view.

For the past half year, I’ve been working hard on my research project at MIT. It’s pretty awesome and is aimed at helping people who know HTML/ CSS but not JS, achieve more with Web technologies (and that’s all I can say for now). It’s also written in JS, so I used Utopia as a helper library, naturally. Utopia evolved even more with this project, got renamed to Bliss and got chainability via my idea about extending DOM prototypes without collisions (can be disabled and the property name is customizable).

All this worked fine while I was the only person working on the project. Thankfully, I might get some help soon, and it might be rather inexperienced (the academia equivalent of interns). Help is very welcome, but it did raise the question: How will these people, who likely only know jQuery, work on the project? [1]

The answer was that the time had come to polish, document and release Bliss to the world. My plan was to spend a weekend documenting it, but it ended up being a little over a week, on and off, while procrastinating from other tasks I had to do. However, I’m very proud of the resulting docs, so much that I gifted myself a domain for them. They are fairly extensive (though some functions still need work) and have two things I always missed in other API docs:

  • Recommendations about what Vanilla JS to use instead when appropriate, instead of guiding people into using library methods even when Vanilla JS would have been perfectly sufficient.
  • A “Show Implementation” button showing the implementation, so you can both learn, and judge whether it’s needed or not, instead of assuming that you should use it over Vanilla JS because it has magic pixie dust. This way, the docs also serve as a source viewer!

So, enjoy Bliss. The helper library for people who don’t like helper libraries. :) In a way, it feels that a journey of 8 years, finally ends today. I hope the result makes you blissful too.

blissfuljs.com

Oh, and don’t forget to follow @blissfuljs on twitter!

[1]: Academia is often a little behind tech-wise, so everyone uses jQuery here — hardly any exceptions. Even though browser support doesn’t usually even matter to research projects!