Sunday, February 3, 2019

Picking a Language for Introductory CS - Why I don't like Python

Introduction

The purpose of this blog post is to explore issues related to the selection of a first programming language for CS majors. I originally started it with the intention of raising questions related to the rapid adoption of Python that is currently happening in CS departments across the US. However, I decided to make it more general to "score" a variety of different languages. I will generally restrict my comments to languages in the top 15 on RedMonk that I know people have used for introductory programming courses or which I can easily imagine people using for that purpose.

Why It Matters

There are many factors that go into why faculty and departments pick a language for CS1, and different departments and educators will weigh them all differently. The most important thing to get out of the way is that we don't teach languages for the sake of languages. Instead, languages are a means to tell computers what we want them to do and, in that way, a vehicle for us to teach students the fundamental concepts that we really care about. Those concepts tend to carry across languages and time. Languages and technologies are constantly being created and updated, and old ones tend to decline in usage as others replace them. These days, software development is a polyglot activity. No single language dominates the development landscape. Even the most popular languages appear in well under 50% of the job postings for developers, so there isn't some obvious pick that is 100% essential for students to know. Contrast this with version-control systems, where git is currently the clear winner. The decision of what language to use is more challenging.

So what factors should be considered when picking a language for CS1 instruction? My short list for this post is the following:
  • Ability to cover important concepts.
  • Enforcing good practices so students don't develop bad habits.
  • Low overhead for small programs.
  • Ability to solve reasonable problems without introducing too much complexity.
  • Early error reporting.
  • Make it easy to learn other languages.
  • Ability to write things that are interesting in the first semester.
  • Bonus points: be open, multi-platform, and preferably open source.
One thing that I have to point out here is that I'm specifically considering CS1 courses whose students intend to go on to gain a deeper knowledge of programming, software development, and computer science in general. I'm not as interested in introduction to programming courses that are terminal or specialized for some particular application. Those courses can have very different learning outcomes that vary greatly based on the intended audience. Since I also think there can be advantages to teaching CS1 and CS2 in the same language, I'll occasionally mention things that would more likely be relevant in CS2 than CS1, but for the most part, I will focus on CS1.

Covering Key Concepts

It is critical to remember that in CS, especially the early classes, what we really want to teach are concepts. Languages are vehicles for that teaching, not the end goal. We aren't trying to teach students about Java, Python, or C++, we are trying to teach them about how to structure logic in formal grammars to solve problems in ways that experience has shown will be scalable and maintainable. Granted, CS1 isn't going to get to the point where scaling to large code bases and maintaining software in teams for years matters, but we are laying the foundation for that.

There are lots of concepts that any reasonable programming language will do a good job teaching. Things like functions, conditional execution, and iteration are found across all languages of interest. However, there are some other concepts that I think are important to teach early that not all languages are equally successful at.

Types

The notion of types is significant in all programming languages, but some languages bring it to the foreground more than others to force students to be aware of it. For the most part, this is a difference between statically typed languages and dynamically typed languages. Statically typed languages make students aware of types right off the bat. Dynamically typed languages require a little coaxing. In addition, topics like subtyping and parametric polymorphism can't really be introduced at all in dynamically typed languages. Depending on when OO concepts are covered, this could be a significant omission.
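To make that last point concrete, here is a minimal Scala sketch (the names are mine, purely for illustration) of the kind of subtyping and parametric polymorphism that a statically typed language lets you point at directly.

abstract class Shape { def area: Double }
class Circle(radius: Double) extends Shape { def area: Double = math.Pi * radius * radius }
class Square(side: Double) extends Shape { def area: Double = side * side }

val s: Shape = new Circle(2.0)  // subtyping: a Circle can stand in for a Shape

// Parametric polymorphism: one definition that works for any element type A.
def firstOrElse[A](items: List[A], default: A): A =
  if (items.nonEmpty) items.head else default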

The difference can manifest in subtle ways too. Consider the difference in the REPL experience between dynamically typed languages like Python and statically typed languages like Scala. In the Python REPL, executing the expression 4+5 and the statement print(4+5) produce exactly the same result. In the Scala REPL, they produce very different results, in large part because the Scala REPL shows the types of expressions. So the response to 4+5 is something like res0: Int = 9. This isn't just based on the typing system in the language; it also reflects deeper decisions by those who implement the REPLs. For example, the JavaScript console in Google Chrome produces slightly different output for evaluating an expression and printing, but it doesn't show types by default. The creators of the Java jshell utility also made the decision to not show types, despite Java being a statically typed language, and jshell also has very different responses to the two commands. The Python REPL is particularly poor here in its decision to not distinguish at all between the two. I say this is particularly poor because I know from years of experience that one of the things students often struggle with is the difference between printing and returning values. It is useful to have a REPL that helps to make the distinction.

To make this completely clear, consider the following REPL session in Python 3.
>>> def foo1():
...     return 9
... 
>>> foo1()
9
>>> def foo2():
...     print(9)
... 
>>> foo2()
9

Now compare that to the equivalent Scala code.
scala> def foo1() = 9
foo1: ()Int

scala> foo1()
res0: Int = 9

scala> def foo2() = println(9)
foo2: ()Unit

scala> foo2()
9
Which one of these do you think is going to help a student understand that there is a difference between returning a value and printing one?

Constants

This is one that I find to be significant and which many novices might not understand. Most languages have a way to express that a variable can't change. Part of me wants to say that C++ has the edge here because the use of const is so critical to good C++ programming, but C++ also lacks immutable collections, as do most imperative languages. In that regard, functional languages might be better, but pure functional languages don't let students see non-const values at all. Scala gets high marks here as a language where an instructor can really explore the role of mutation.

This is an area where Python completely falls down. Making a value constant in Python is well beyond what you can do with novices. JavaScript was bad here too until ES6 introduced const.
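For contrast, here is a minimal Scala sketch of the kind of mutation discussion I have in mind; the compiler enforces the distinction rather than leaving it to convention.

val x = 5          // constant: the compiler rejects reassignment
var y = 5          // mutable variable
// x = 6           // compile-time error: reassignment to val
y = 6              // fine

val nums = List(1, 2, 3)           // immutable collection
val more = nums :+ 4               // "adding" builds a new List; nums is unchanged
import scala.collection.mutable
val buf = mutable.Buffer(1, 2, 3)  // mutable collection, for when mutation is the lesson
buf += 4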

Scope

Speaking of the additions to JavaScript in ES6, I also feel that scope is a significant concept for programming and one that can be challenging for novice programmers to understand. Most of the languages that are commonly used have block scope. The exceptions tend to be scripting languages that were created for writing only small programs and therefore have simplified systems like only having global scope and function scope. The ECMAScript committee realized that this really isn't a good thing for a language being used to develop larger programs, so they introduced let and const basically to replace var. Unfortunately, both Python and PHP still lack block scope. Python also has another very odd behavior for default arguments that is closely related to scoping that I will come back to later.
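As a quick illustration of what block scope buys you, consider this Scala fragment (a sketch of my own, not from any particular course).

val limit = 10
if (limit > 5) {
  val doubled = limit * 2  // doubled is visible only inside this block
  println(doubled)
}
// println(doubled)  // compile-time error: not found: value doubled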

Private

If OO is done early, I would also argue that private is a really important concept to cover. Other visibility options are helpful, but making members private is a big part of what allows OO to simplify life in large code bases because you can hide away certain details and know that certain parts of state can only be modified in a small region of code. In practice, this matters less if everything is made immutable, but in CS1 the goal is teaching concepts and if a language doesn't have a construct for doing something, it makes it a lot harder to teach.


This is an area where JavaScript has been bad. One could argue that because it was prototype based, JavaScript has always been something of an odd duck in this regard and that teaching students JavaScript to start with was going to make moving to other languages more challenging in terms of OO. However, the newest additions to JavaScript make it feel more like other class-based OO languages, including the ability to declare private members. Python is still weak here, as its only notion of private is by convention. I can imagine students asking why, if something is so important, the language doesn't take it seriously.
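For what it's worth, here is a minimal Scala sketch of the idea (the class and names are hypothetical): mutable state that can only be modified through a small, controlled interface.

class BankAccount {
  private var balance = 0.0                // visible only inside this class
  def deposit(amount: Double): Unit =
    if (amount > 0) balance += amount      // the one place balance can grow
  def currentBalance: Double = balance
}

val acct = new BankAccount
acct.deposit(100.0)
// acct.balance = -50.0  // compile-time error: balance is private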

Discourage Bad Habits

I firmly believe that the choice of a first programming language is critical for another key reason: it is hard to break bad habits and unlearn things. This rule applies to all things, not just programming. I recall my high school physics teacher telling me the same thing about physics. I have also experienced it myself in the area of programming, both in my own learning and in working with students. I learned to program in line-numbered BASIC with GOTO and IF as the primary control structures. Even after moving to Pascal and C, then C++ in college, the remnants of the style I adopted from BASIC were still present for many years.

Many readers will be familiar with the following quote from Edsger Dijkstra.
It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.
It takes a long time to break away from the habits instilled by learning to code in line-numbered BASIC. I saw this up close early in my teaching career when I had a student who had also learned how to code that way. I think it took two years to finally convince him that the style of programming he was used to was not a good one. He missed out on a lot of potential learning because he was stuck in that old style.

The thing is, this student could defend his style because he was good at that style and as a result, he could make it work. This is challenging to deal with and I certainly don't think that it is unique to this student.

Dynamic Typing

This is the area where I worry most about students picking up poor habits with modern programming languages. GOTO isn't a problem anymore as many newer languages don't support it, few examples students will stumble across on the web will use it, and because it is a language feature, students can't accidentally start using it without seeing an example. However, abusing types is something that students can easily fall into. Indeed, some languages have major libraries that basically tell students that this is okay.

The simplest example of this is probably heterogeneous collections. In dynamically typed languages, if a student has an assignment where they have to store multiple names and ages, they could easily set up something like ["Pat", 34, "Bob", 40, "Betty", 25]. Even-indexed elements are names and the following odd-indexed element is the age. They can easily make this work, and because they can make it work, it can be very hard to convince them that this style of programming by convention leads to brittle code that is hard to maintain and is likely to break.
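In a statically typed language, the same assignment nudges students toward a structure that states the convention explicitly. A minimal Scala sketch (the case class is mine, for illustration):

// List("Pat", 34, "Bob", 40) would be inferred as List[Any] -- an immediate red flag.
case class Person(name: String, age: Int)
val people = List(Person("Pat", 34), Person("Bob", 40), Person("Betty", 25))
val ages = people.map(_.age)  // the compiler knows every element has an age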

Convincing students that they should do the right thing with types becomes especially difficult if they see examples of what they are doing in libraries that they use in class. One behavior that I find particularly disturbing is when functions in dynamic languages return different types based on the values of the inputs. Unfortunately, this isn't all that uncommon in dynamically typed languages, even though it produces code that is nearly impossible to maintain. One example of this is the pandas.DataFrame.xs method in the Python Pandas library, which is used extensively for data science. Statically typed languages tend to make it very clear that this type of behavior is problematic by making it painful to do, if it can be done at all. As a result, students are unlikely to do it by accident and then find ways to "make it work."

Cute Tricks

Another habit that I don't want to encourage in novice programmers is "cute trick" types of solutions. These are ways of writing things that tend to produce code that can be hard to read and maintain. One example of this that jumps out to me is using short-circuit logic operators for things other than logic in languages that have "truthy" and "falsey" values instead of actual Booleans.

Consider result = funcThatIsFalseyOnFailure() || defaultValue written in place of a conditional expression. This type of code hides what is really going on and uses the or operator to do something that clearly isn't just Boolean logic.
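In a language with real Booleans the trick doesn't even compile, and the honest version makes the intent explicit. A small, hypothetical Scala sketch:

val response = ""                      // imagine this came back from a failed lookup
// val result = response || "default"  // compile-time error: || is only defined on Boolean
val result = if (response.nonEmpty) response else "default"  // the logic is visible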

But I can teach it the right way...

I feel that I can already hear some readers saying that they don't need languages to help enforce good style because they will teach students to do things the right way, and they can enforce approaches that avoid the bad habits. Unfortunately, I think that this ignores how much our students learn from sources other than the classroom. Personally, I think it is good to have students learn from other sources. Our real goal should be to set them up for life-long learning. Learning from other students and other resources should probably be viewed as a positive, but we have to acknowledge that when that happens, it is outside of our control. As such, students are going to see and probably adopt whatever a language allows, even if we don't tell them about it. The more popular the language, the more likely this is to happen.

Low Overhead for Small Programs

I've seen some debate about this, but personally, I like languages used in introductory programming classes to have low overhead for small programs. This is an area where scripting languages like Python and JavaScript shine while more traditional languages fall down. Java is particularly horrible in this regard as the traditional "Hello World" program involves a large number of keywords that you can't possibly explain to students on the first day of class. I would argue that this is why teaching environments like BlueJ and Greenfoot were created for Java. Note that I love Greenfoot and have used it for teaching non-majors, but I'd really prefer to just use a language where "Hello World" reads something like println("Hello World."). The only aspect of that which needs explaining is the double quotes for making string literals, and pretty much every programming language is going to have something like that.

I'm also fond of REPLs (Read-Eval-Print Loops). Here again, scripting languages shine, though even Java now has jshell. The first day I talk about a language, I like nothing better than to drop the students into a REPL and let them play around. As I mentioned above though, not all REPL experiences are created equal in terms of teaching; some provide more information to help students understand what is going on than others.

I also have to point out that scripting languages aren't the only languages that score well here. Functional languages, both statically typed and dynamically typed varieties, also tend to have both REPLs and support a style that is closer to scripting languages than to languages like Java and C++.

Hold Complexity at Bay

Limiting complexity isn't just about the first day and "Hello World". Every practical language is going to have advanced features and dark corners. Languages that are good for CS1 don't force the instructor to introduce these before they want to. The complexities that I refer to here come in a lot of forms and vary by language. Some are "concessions" that the language creators made early on to make a language more practical. Others are advanced features that benefit professional developers but only complicate life for novices. Some are just elements inherent to the programming model for that language.

I would argue that this is another area where Java falls down. One simple example fits in with the last point: every Java application requires at least one usage of the static keyword, but static is really challenging to explain to novice programmers. A slightly more subtle example is the distinction between primitives and object types in Java. The motivation for this is simple: when Java was created, its designers needed primitives to make the language sufficiently performant, so they sacrificed some of the purity of the object model to put in primitive types and enable programs written in Java to run faster. They had learned from Smalltalk that making every value an object was too big of a drag on performance and would seriously hamper adoption. (I have to wonder if this might also have been a factor in the late adoption of Python, which is slightly older than Java but didn't see broad adoption until many years after Java had become a dominant language and computers had gotten much faster.)

To understand what I mean about the problem with primitives, consider the issue of converting doubles and Strings to ints in Java. These are things that will inevitably come up in CS1. If you aren't doing this in CS1 it's probably because you have either intentionally or subconsciously created examples that avoid this. If you have double x and you need an int, you would use a C-style cast like (int)x to do the conversion. However, if you have String s and you want an int, you need to use Integer.parseInt(s) to do the conversion.

Later in the semester, you probably teach students to use Java collections. Then you get to explain why they can't declare a List<int> and instead have to use List<Integer>. As an instructor, you just have to pray that no student ever tries to create a List<String>[], or an array of any other generic type, so you don't have to explain why they can't do that.

The thing is, these are all things that are going to come up in CS1 naturally, and they are going to slow you down if you are an instructor. They also aren't anything more than artifacts of the early history of the language. You can see this in how Scala, which is also a statically typed language that compiles to the JVM, handles these same situations. The conversions to integers are done with x.toInt and s.toInt. For some reason, I never have any trouble explaining that to students. If a student chooses to make an Array[List[Int]], there isn't a problem either. Interestingly, Scala can do this without sacrificing speed. The reason is that Scala's compiler is smarter: it compiles things to primitives whenever it can, but that is an implementation detail that never comes up in CS1 or CS2 and would only matter if you get to the point where you are trying to do high-performance computing.
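Here is roughly what that looks like in the Scala REPL (output lightly abbreviated):

scala> val x = 3.7
x: Double = 3.7

scala> x.toInt
res0: Int = 3

scala> "42".toInt
res1: Int = 42

scala> val grid = Array(List(1, 2), List(3))
grid: Array[List[Int]] = Array(List(1, 2), List(3))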

I feel that C++ also falls down in the area of exposing complexities early, but mostly in the area of the memory model and memory handling. You really can't push C++ all that far before you are going to have to start discussing references, pointers, and memory handling. These are all things that students really need to learn and understand before they graduate, but personally, I don't want to have them take time out of my CS1 course.

Early Error Reporting

One of the fundamental rules in education is that early feedback is good. It is true for software development as well, whether you are talking about finding out if a change needs to be made to the product you are building or just an error in the code. The earlier you find out about things, the less it costs to deal with it. I believe that the same applies to how students find out about errors. Students benefit from error messages that they get early as opposed to finding out that something is wrong later on.

One of the things I like to discuss in CS1 is different types of errors. I tell them about the hierarchy of syntax, runtime, and logic errors. I point out how syntax errors are generally preferred because they get nice error messages telling them what and where the problem is while logic errors are the worst because they get very little assistance in finding those errors. As a developer, I really prefer languages and programming styles that turn as many errors as possible into syntax errors and produce as few logic errors as possible.

This is another place where dynamically typed languages, like JavaScript and Python, score poorly. Both produce very few syntax errors. Most of the errors that students are going to see will be runtime or logic errors that won't pop up until they run the code with inputs that cause the offending path to be executed. JavaScript is particularly bad about trying to make things work, so you don't even get many runtime errors, and lots of mistakes become logic errors.
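To illustrate the difference, here is a small, hypothetical example in the Scala REPL (output abbreviated). The mistake is rejected before the program ever runs; the equivalent call in Python would fail only when that line actually executes.

scala> def average(xs: List[Double]) = xs.sum / xs.length
average: (xs: List[Double])Double

scala> average("not a list")
<console>:13: error: type mismatch;
 found   : String("not a list")
 required: List[Double]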

In professional environments, developers deal with this by building up a suite of unit tests. While testing is something that I talk about in CS1, if I have to cover testable code and a unit testing framework to help my students do proper development in a language, then the lack of syntax errors becomes something that introduces extra unnecessary complexity early.

Ease of Learning Other Languages

Back in the mid-1990s, before Java had gained momentum, the question of what language a fresh grad was going to use in their first job was as simple as C or C++. At the time, C++ hadn't diverged nearly as much from C as it has today, so a department could easily give students a deep coverage of all the languages they were likely to use in their first job. Even with the rise of Java, a 4-year degree could be structured to introduce students to every language in significant usage.

When I look at the most recent RedMonk language ranking, I feel like my students could easily use almost any of the top 20 languages in their first job. Indeed, I have graduates working with languages well below the top 20 in their jobs. When it comes to programming, we now live in a polyglot world. We shouldn't even be trying to teach students every language they might use in their first few jobs because there are too many possibilities. Instead, we need to strive to lay a foundation where students learn a diverse set of languages that will enable them to pick up new languages quickly. Personally, I think it is helpful if the language they learn in CS1 makes it easier, not harder, for them to pick up the other languages they are going to be exposed to later in the program. If the instructors in later courses are able to draw parallels between language features in the language used in CS1 and the next few languages students learn, this can definitely facilitate learning.

So what are the properties of a language that can help students move to other languages? As far as I know, there haven't been any rigorous studies done on this topic. While that is unfortunate, I completely understand why it is the case. Large-scale educational experiments are challenging to do well and are influenced by a lot of different variables. Creating a study where the influence of language features rises above the noise would be very challenging. As such, I'm just going to throw out some ideas.

I feel that a language that leads well into other languages is one that isn't too odd in either syntax or semantics so that students can move to a variety of other languages and paradigms without being surprised by the new language. Let's start with the easy part of this, use a language that isn't too syntactically different from the ones they will learn next. Most people would likely agree that you shouldn't start in APL or J. I see this as an argument against Scheme/Racket as well. (I have to point out that every language will have strikes against it, and what matters is the total weight of pros and cons. For me, the dynamic typing of Scheme/Racket is a bigger killer than the syntax for the reasons listed above, but everyone places different weights on these things and there are certainly some big positives for Scheme/Racket.)

I'd even go further and argue that if a language has too many control structures that aren't seen in other places, that could be a problem as well. For example, Python supports an else clause on loops that you won't see in other languages. The use of Boolean operators as shortcuts that I mentioned above could be seen as a fail here as well.

On the semantic side, I think that scope is again significant. I have been told by a TA from Rice University that when students get to Java in their third semester, after having two semesters of Python, many find block scope to be confusing. Outside of Python, PHP, and Ruby, block scope is the standard rule, and given that scope is something beginning students struggle with, having a first language that handles it differently than most others seems problematic to me for future growth. Python also has very unusual handling of default parameters to functions. Consider the following code.

>>> def foo(a, b=[]):
...     b.append(a)
...     return b
... 
>>> foo(3)
[3]
>>> foo(4)
[3, 4]

While the scope of b is only inside of foo, the default list is created once, when foo is defined, and that same list is remembered from one invocation to the next. I have yet to find another language with these semantics for default values to functions.
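For comparison, Scala evaluates a default argument expression anew on every call, which matches what most students expect. A quick sketch mirroring the Python example above (REPL output abbreviated):

scala> import scala.collection.mutable

scala> def foo(a: Int, b: mutable.Buffer[Int] = mutable.Buffer[Int]()) = { b += a; b }

scala> foo(3)
res0: scala.collection.mutable.Buffer[Int] = ArrayBuffer(3)

scala> foo(4)
res1: scala.collection.mutable.Buffer[Int] = ArrayBuffer(4)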

Again on the semantic side, languages that have very different models of OO from the "norm" are also a red flag for me. JavaScript recently gained classes, but the OO model in JavaScript is still prototype based and very different from every other language in significant use today. Python also has odd elements to its OO model, like the need to list self as the first argument for methods. I can see how some might view this as a teaching advantage, since it shows how the object is made available, but when the student moves to another language, it is yet one more thing to handle differently.

Lastly, I feel very strongly that before graduating, students should be exposed to multiple paradigms of programming. If a major never covers anything other than the OO-Imperative style of Java and C++, they will find it much harder to pick up other languages later that either aren't object-oriented or that focus on functional instead of imperative. For this reason, I actually like the idea of multi-paradigm languages for CS1 and CS2. Of course, one has to keep in mind that multi-paradigm can break a lot of the other things we are looking for if it makes things more complex.

Interesting First Semester Assignments

One of the reasons that Java took off in the education space was the relative ease with which students could write graphical code, especially using Applets. Applets have long since died, but the ability to put interesting assignments into CS1 and CS2 is still very important. Interesting here can mean many things, and today it goes well beyond graphics to include things like playing with data, robotics, or socially relevant problems. Regardless of the details, a general requirement is that you have strong library support, and hopefully ways of bringing in those libraries that don't overly complicate things. This is an area where JVM languages and Python excel. Really low-level languages, like C, tend to fall down the most here because the amount of code that is required to do anything interesting is often quite large.

Other Factors

There are a number of other factors that impact my thoughts on picking a language for introductory programming. I personally also appreciate languages that run on multiple platforms and are hopefully open source. Those languages generally are better for allowing students to work on their own machines, regardless of their OS, and have tooling that is free of cost. I also really like a language that works well for both CS1 and CS2. I discuss that in more depth in an earlier post.

I also like to use a language and accompanying tools that are used professionally. It doesn't have to be a top 5 language. I'm fine with top 20 because the reality is that no single language dominates today, so whatever language you pick, the ability to learn other languages is more significant than knowing a specific language. The thing is, I'm not all that big on using languages and environments designed purely for teaching in courses for CS majors. The reason is pretty simple: I only have students for 4 years and a limited number of hours, and since I want to use the same language for CS1 and CS2, I don't feel comfortable spending that much time teaching something that I know isn't used anywhere outside of the classroom.

We also have to realize that the languages used in industry aren't fixed, and what we choose to teach in colleges has an impact on them. One of the things people consider when they pick the language for a new project is whether they can hire enough developers of sufficient talent who know it. The more colleges that teach a language, the more likely it is to be adopted in industry. I'm quite certain that the broad adoption of Java in colleges played a major factor in its dominance in industry. Similarly, the current growth in Python is inevitably fueling its professional adoption. So when you pick a language to teach, you might ask yourself, "Is this the language I want the software that runs my life to be written in?" If you don't think you want your bank or the elevator you ride in using that language, perhaps you should pick a different one.

Why People Choose Python

The language that seems to be taking over early CS education right now is Python. That is actually what prompted me to write this post because I worry about this particular choice for introductory language. There is no doubt that Python has some real advantages in the fact that it has low overhead in CS1 and that there is broad library support to enable students to do interesting things. Unfortunately, it falls down in other areas. The ones I worry most about are an inability to cover certain topics that I consider significant and the possibility for students to pick up bad habits that the language doesn't prevent.

Perhaps the most standard argument that I hear for Python is that it is "simple" and easy for students to learn. While I completely agree that Python is an easy language for experienced programmers to pick up, it isn't at all clear that being easy for experienced programmers to learn is the same as being easy for beginning developers to learn. Learning to program isn't just about syntax. It is about learning a new way of thinking, how to structure logic, and the general semantics behind that syntax. Having a language that enforces more structure and which gives early feedback when rules are broken could actually be very useful for novices. Some evidence for this was provided by Alzhrhani et al. (2018), who found that students in a large data sample struggle more with Python than they do with C++. Indeed, I would argue that moving from a language that provides more error checking to one that provides less is generally easier than going the other way, and that more assistance from the language in writing good code is especially beneficial for the novice. Having a language that helps the student build their mental model of the semantics of programming could actually be more significant than having a simple syntax.

I feel compelled to mention that another common argument for Python is that it teaches indentation. There is definitely truth to this, but I can't say I find it very motivating, especially since Python lacks the block scope that the indented blocks suggest. The reality is that auto-formatting tools for languages without significant whitespace are well developed, and companies with strict style guides can easily have them enforced by software. In contrast, automatically cleaning up poor type usage in programs written in dynamically typed languages is a far more complex problem.

In some ways, the true challenge of teaching introductory programming using Python is probably summed up well by terms frequently seen in discussions of Python programs. It doesn't take much reading on Python to come across the term "Pythonic". When I asked a Python programmer about functions that return different types based on the argument values, like xs on Pandas' DataFrames, I was told that was un-Pythonic. The problem is that Pythonic and un-Pythonic are advanced concepts. When you are dealing with students who are still trying to understand the basics of conditionals, iteration, and functions, they simply aren't prepared to comprehend "Pythonic". Instead of having good coding done by convention, those students need a language that does more to enforce good style.

My Choice for CS1 and CS2

If you've gotten this far, you have already shown great perseverance in our current age of short attention spans. You might be wondering what language I would pick for teaching CS1 and CS2. I actually like Scala as the language for these courses. We've been using Scala in this role at Trinity University for almost 10 years now, and I have very few complaints. As I've said before in this post, no language is perfect, but I find that the pros for Scala definitely outweigh the cons. I have some older blog posts (here and here) where I discuss this in more detail as part of my thoughts from the first ~5 years of using Scala. I'll just list some of the highlights here.

  • REPL and Scripting interface for CS1.
  • Static type checking and lots of syntax errors instead of runtime errors or logic errors.
    • Type errors prevent a lot of issues.
    • CS1 students should never see a NullPointerException.
  • Syntax/semantics allow coverage of things I care about including const/mutability, block scope, OO, subtyping, parametric types, etc.
  • Access to the full JVM for libraries.
  • Expressive syntax that, combined with good libraries, allows me to give interesting assignments that don't take thousands of lines of code.
  • Solid APIs that include links to the types for arguments and return values.
  • In CS2 I can cover multithreading and networking.
  • I can cover CS1 and CS2 topics without the language forcing me to talk about things I'm not ready to cover yet.
  • Uniform OO syntax without primitives.
  • Fairly standard OO model.
  • Multiparadigm so students have a nice path to C++, Haskell, Java, etc.



Saturday, October 20, 2018

Scala Numerical Performance with Scala Native and Graal

This post is a follow-on to my earlier post looking at the performance of different approaches to writing an n-body simulation using Scala. That post focused on the style of Scala code used and how that impacted the performance. This post uses that same code, so you should refer to it to see what the different types of simulations are doing.


Comparing Runtimes

For someone who is interested in doing numerical simulations, Scala and the JVM might seem like odd choices, but the benefits of having a good language that is highly expressive and maintainable can be significant, even for numerical work. In addition to the impact of programming style and the optimizations done by the compiler that produces JVM bytecode, performance can also be significantly impacted by the choice of runtime environment.

Historically, there wasn't much in the way of choice. Sun, then later Oracle, made the only JVM that was in even reasonably broad usage, and enough effort was put into making the HotSpot optimizer work that it was generally a good choice. Today, however, there are a few more options. If you are running Linux, odds are good that you have OpenJDK by default and would have to specifically download and install the Oracle version if you want it. In addition, Oracle has recently been working on Graal, a new virtual environment for both JVM and non-JVM languages. Part of the argument for Graal was that the old C2 HotSpot compiler, written in C++, had simply become too brittle, and it was hard to add new optimizations. Graal is being built fresh from the ground up using Java, and many new types of analysis are included in it. While I have seen benchmarks indicating that Graal, though young, is already a faster option for many Scala workloads, I wasn't certain if that would be the case for numerical work. This is at least in part due to the fact that one of the Graal talks this last summer at Scala Days mentioned that Graal was not yet emitting SIMD instructions for numerical computations.

In addition, the newest addition to the list of supported environments for Scala is Scala Native. This project uses LLVM to compile Scala source to native executables. One of the main motivators for this right now is using Scala for batch processing because native executables don't suffer from the startup times of bringing up the JVM. This project is still in beta, but I wanted to see if it might be able to produce executables with good numerical performance as well.

For these benchmarks, the Scala code was compiled with -opt:_ and run on each JVM with no additional options. I am using a different machine from my earlier post, which explains the significant runtime differences between this post and the earlier one using a similar JVM. The following table gives timing results for the five approaches using the five different runtimes.
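For anyone wanting to reproduce the setup, a hypothetical build.sbt fragment with that flag might look like the following; the post itself only specifies that -opt:_ was used.

// Enable all of the Scala 2.12 optimizer's settings.
scalacOptions += "-opt:_"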

Environment                  Style            Average Time [s]  Stdev [s]
Oracle JDK 8-191             Value Class      0.394             0.012
                             Mutable Class    0.683             0.015
                             Immutable Class  0.809             0.010
                             Functional 1     4.246             0.439
                             Functional 2     1.723             0.027
Oracle JDK 11                Value Class      0.378             0.006
                             Mutable Class    0.690             0.012
                             Immutable Class  0.940             0.083
                             Functional 1     4.420             0.059
                             Functional 2     1.589             0.021
OpenJDK 10                   Value Class      0.388             0.008
                             Mutable Class    0.715             0.006
                             Immutable Class  0.892             0.013
                             Functional 1     4.405             0.039
                             Functional 2     1.689             0.013
GraalVM 1.0.0-rc7, Java 1.8  Value Class      0.377             0.003
                             Mutable Class    0.396             0.003
                             Immutable Class  0.694             0.108
                             Functional 1     4.054             0.151
                             Functional 2     0.793             0.016
Scala Native 0.3.8           Value Class      2.603             0.185
                             Mutable Class    1.028             0.020
                             Immutable Class  2.595             0.020
                             Functional 1     16.84             1.39
                             Functional 2     5.232             0.655

Looking at the first three runtimes, we notice that there is very little difference between Oracle and OpenJDK over the various Java versions from 8 to 11. In all three, the value class approach is fastest by far, followed by the version with the mutable classes, then the immutable classes, with the functional approaches being slowest by a fair margin.

Things get more interesting when we look at Graal. The performance of the value class version is roughly the same as for the other JVMs, but every other version runs significantly faster under Graal than in the others. The ordering stays the same as to which approaches are fastest, but the magnitude of how much slower each version is than the value class version changes dramatically. Under Graal, the mutable class version is almost as fast as the value class version instead of being almost a factor of two slower. Most impressive is that the second functional version is only a factor of two slower than the value class version instead of being four times slower. This is significant for two reasons. One is that it is the version written in the most idiomatic Scala style. The other is that this version literally does twice as much work in terms of distance calculations as the other versions. That means that we really can't expect it to do any better than being 2x as slow. The fact that it takes only about twice as long to run means that under Graal there isn't a significant overhead to the functional approach the way there is using the older JVMs.

At the end of the table, we have the results for Scala Native. Unfortunately, it is clear that Scala Native is not yet ready for running performance-critical numerical code. One result that stands out is that the value class version is not the fastest. Indeed, it runs at a speed roughly equal to the immutable class version and 2.5x slower than the mutable class version. I assume that this means that the value class optimizations have not yet been implemented in Scala Native. Why even the mutable class version is more than 2x slower than under Graal, and at least 50% slower than under the other VMs, is a bit puzzling to me, as I did the timing using a release build. I expect that this is something that will improve over time. Scala Native is still in the very early stages, and there is a lot of room for the project to grow.


Comparison to C++

As before, I also ran a test comparing these Scala results to C++ code compiled with the GNU compiler using the -Ofast flag. This uses a simpler test with the value class technique. You can see in the table below that the Scala code is performing about 15% slower than C++ in all of the environments except Scala Native, which is several times slower. Given the results above indicating that Scala Native isn't nicely optimizing value classes yet, this result for Scala Native isn't surprising.

Environment        Average Time [s]
g++                3.29
Oracle JDK 8-191   3.88
Oracle JDK 11      3.77
OpenJDK 10         3.82
GraalVM 1.0.0-rc7  3.82
Scala Native       21.6


Conclusions

For me, there are two main takeaway messages from this. The first is that while Scala Native holds the long-term potential to give Scala a higher-performance platform for running computationally intensive jobs, it isn't there yet. A lot more work needs to go into optimization to get it to reach its potential of competing with other natively compiled languages. I firmly believe that it can get there and is moving in that direction, but it isn't ready yet.

On the other hand, these results indicate to me that if you are programming Scala, you should strongly consider using Graal, even if you are doing numeric work. Based on presentations at Scala Days 2018 in New York, I know that this is the case for non-numeric code, but at the time Graal wasn't emitting SIMD instructions, so it wasn't clear if it would compare well to the old C2 HotSpot optimizer. These results show that regardless of style, Graal is at least as performant as the other JVM options and that in some cases it is much faster. Perhaps most significantly, the functional 2 style, which is written in a much more idiomatic style for Scala, is more than 2x as fast with Graal as with the other JVMs. I should also note that Graal still allows me to run graphical applications like my SwiftVis2 plotting package, so there isn't any loss of overall functionality.

Going forward, I want to test the performance of more complex n-body simulations using trees and also look at multithreaded performance to see how Graal compares for those. Scala Native is still only single threaded for pure Scala code, so it will likely be left out of those tests.

GraalVM native images are another feature that I would really like to explore, but there are some challenges in building them from Scala code that I did not take the time to overcome for this post.

Monday, August 6, 2018

Why is JavaFX so slow? SwiftVis2 and Speed Problems with JavaFX 2D Canvas

For the last year or so I've been working on a plotting package called SwiftVis2. There are a number of goals to this project, but a big one is to provide a solid plotting package for Scala that can be used with Spark projects in Scala and which can draw plots with a large number of points. My own personal use case for this is plotting ring simulations often involving millions of particles, each of which is a point in a scatter chart. The figure below shows an example of this made with SwiftVis2 using the Swing/Java2D renderer with 8.9 million particles.

SwiftVis2 plot of ring simulation using Swing renderer.

This particular use case pretty much precludes a lot of the browser-based plotting libraries that seem to be popular these days, as converting 8.9 million data points to JSON for plotting in JavaScript simply isn't feasible for both memory and speed reasons.

As the name SwiftVis2 implies, there is an earlier SwiftVis plotting program. It is a GUI-based program written in Java. One of the things that excited me about the upgrade was the ability to use JavaFX instead of Java2D. My understanding is that one of the main reasons for building JavaFX from the ground up was to take better advantage of graphics cards, and I was really hoping that a JavaFX-based rendering engine would outperform one based on the older Java2D library.

I didn't really test this until I was writing up a paper on SwiftVis2 for CSCE'18 and I did performance tests against some other plotting packages. In particular, I compared to Breeze-Viz, which is a Scala wrapper for JFreeChart. JFreeChart uses Swing and Java2D, so I was really hoping that SwiftVis2 would be faster. At the time, I only had a renderer that used JavaFX in SwiftVis2, and I was really disappointed when Breeze-Viz turned out to run roughly twice as fast on a plot like the one above. Tests of NSLP, a different Scala plotting package using Swing/Java2D, showed that it also ran at roughly twice the speed of SwiftVis2 using JavaFX.

Some searching on the internet showed me that there were known issues with drawing too many elements at once on a JavaFX canvas because of the queuing. So I enhanced my renderer to batch the drawing. This has a nice side effect that users can see the plot draw incrementally, so they know their program isn't frozen, but it didn't help the overall speed at all.
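A minimal sketch of the batching idea (my own illustration, not SwiftVis2's actual code): split the points into chunks and schedule each chunk on the JavaFX application thread separately, so the render queue gets flushed between chunks.

import javafx.application.Platform
import javafx.scene.canvas.GraphicsContext

def drawBatched(gc: GraphicsContext, xs: Array[Double], ys: Array[Double],
    batchSize: Int = 100000): Unit = {
  for (start <- xs.indices by batchSize) {
    val end = math.min(start + batchSize, xs.length)
    Platform.runLater { () =>
      var i = start
      while (i < end) {
        gc.fillOval(xs(i), ys(i), 1.0, 1.0)  // one sub-pixel point per particle
        i += 1
      }
    }
  }
}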

Since SwiftVis2 was written to allow multiple types of renderers, I went ahead and wrote a Swing renderer that uses Java2D, just to see what the performance was like. The results, shown in the following table, were pretty astounding to me. Note that these times were for drawing plots like the one above. It is also worth noting that upgrading my graphics driver improved the performance for JavaFX more than it did for the Swing-based libraries, but even with new drivers, JavaFX is still slower.


Package                      Render Time for 8.9 million Points
Breeze-Viz                   80 secs
SwiftVis2 with JavaFX        108 secs
SwiftVis2 with Swing/Java2D  13 secs

Keep in mind here that the two SwiftVis2 options are running the exact same code for everything except the final drawing; the only difference is which subclass of my Renderer trait is being used. While I do feel a certain amount of happiness in the fact that SwiftVis2 using Swing is significantly faster than the Breeze-Viz wrapper for JFreeChart, I'm still astounded that JavaFX is nearly 10x slower than Swing/Java2D.

Not only is JavaFX slower, it does an inferior job of antialiasing when drawing circles that are sub-pixel in size. The following figure shows the plot created using the same data and the JavaFX renderer. The higher saturation is obvious. The rendering with Swing/Java2D is the more accurate of the two.

Ring simulation plot made using the JavaFX renderer. Note that the points are sub-pixel and this renderer over-saturates the drawing.
To me, this seems like a serious failing on the part of JavaFX. JavaFX is actually harder to use than Swing, and I always forgave that based on the assumption that special handling was needed to work with the GPU, but that the tradeoff would be significantly improved performance. I can only hope that whatever ails JavaFX in this regard can be fixed and that at some point in the future, JavaFX rendering can match the performance promise that comes with completely rebuilding a GUI/graphics library from the ground up.

Also, in case anyone is wondering why I'm bothering to create yet another plotting package, I will note that SwiftVis2 looks significantly better than JFreeChart, especially in the default behavior. For comparison, the figure below shows the output of the most basic invocation of Breeze-Viz for this plot, which is comparable to the above plots for SwiftVis2. Even if the range on the y-axis is adjusted, it still generally doesn't look as good as the SwiftVis2 output, especially in regards to axis labels. This is the reason I never really got into using JFreeChart in the past. SwiftVis2 is still in the early stages of development, but it already does a better job with my primary use cases.

This is the same figure as those above made with default settings using Breeze-Viz instead of SwiftVis2.
You can find the code for my basic performance tests on GitHub. The data file is not on GitHub because it is rather large. You can find a copy of it on my web space.

Sunday, June 24, 2018

Supporting Scala Adoption in Colleges

Scala Days North America just finished, and one of the big surprises for me was how many talks focused on teaching Scala. The first day had roughly one talk on teaching Scala in every time block.


The topic of increasing the use of Scala in colleges also came up in the final panel (Presentation Video).

I think it is pretty clear that the companies using Scala feel a need to get more developers who either know the language or can transition into it quickly and easily. Large companies, like Twitter, can run their own training programs, but that is out of reach for most companies. What most companies need is for more colleges to at least introduce Scala and the idea of functional programming in their curricula.

The Current State of Affairs

The current reality of college CS education is that imperative OO is king. For roughly two decades, Java dominated the landscape of college CS courses, especially at the introductory level. In the last ~5 years, Python has made significant inroads, again especially at the introductory level. Let's be honest, that is really a move in the wrong direction for those who want graduates who are prepared to work with Scala: not only is Python not generally taught in a functional way, its dynamic typing doesn't prepare students to think in the type-oriented manner that is ideal for Scala programming. (I have to note that this move to Python has been predicated on the idea that Python is a simpler language to learn, but an analysis of a large dataset of student exercise solutions for zyBooks indicates this might not actually be the case.)

It is also true that some aspects of functional programming, namely the use of higher-order functions, have moved into pretty much all major programming languages. However, it isn't at all clear that these concepts are currently appearing in college curricula. Part of the reason for this is that in nearly all cases, features added to languages late are more complex than those that were part of the original design. Yes, Java 8 has lambdas, and you can use map and filter on Streams, but there is a lot of overhead, both syntactic and cognitive, that makes it much harder to teach to students in Java than it is in Scala. That overhead means professors are less likely to "get to it" during any particular course.
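For a sense of what I mean, here is the kind of higher-order function usage under discussion, written in Scala (a small sketch of my own):

val nums = List(1, 2, 3, 4, 5)
val doubledEvens = nums.filter(_ % 2 == 0).map(_ * 2)  // List(4, 8)

The Scala version works directly on any ordinary collection; the Java equivalent requires converting to a Stream, more verbose lambda syntax, and collecting the result back into a collection at the end.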

The bottom line is that the vast majority of students coming out of colleges today have never heard of Scala, nor have they ever been taught to think about problems in a functional way.

What Can You Do?

If you work for a company that uses Scala, it would probably benefit you greatly if this state of affairs could be changed so that you can hire kids straight out of college who are ready to move into your development environment. The question is, how do you do this? I have a few suggestions.

First, contact the CS departments at local colleges and/or your alma mater. Tell them the importance of Scala and functional programming to your organization and that you would love to hire graduates with certain skills. This has to be done in the right way though, so let's break it down a bit.

Who Should You Talk To?

My guess is that a lot of people will immediately wonder who they should be reaching out to, but I'm pretty sure it doesn't matter, as long as it is a faculty member in Computer Science. You can always start with the chair and ask them if there is someone else in the department that would be better to talk to, but no matter who you start with, you'll probably wind up being directed to the most applicable person.

What Types Of Schools?

I know that it will be tempting to focus on big schools with the idea that you can get more bang for the buck if you do something with the local state school that graduates 300+ CS majors a year. The problem here is that many of the stereotypes for big organizations not being agile apply to colleges as well. The bigger the school, the harder it likely is to create change. Smaller schools might not graduate as many students, but they are probably more adaptable and open to change. Departments with <10 faculty members might not crank out hundreds of majors, but they can probably produce a few very good ones each year that you would love to have on your team.

This doesn't mean you skip the big schools. If you can convince your local state school, the payoffs are huge. I would just urge you to cast a wide net. Any school with a CS department is worth reaching out to.

How To Start The Conversation

It probably isn't the best idea to just go to a college and tell them that they are doing things wrong and that you have ideas on how they can do it better. One way to start a conversation would be to say that you are interested in talking about pedagogy and their curriculum to get a better idea of what they do and how they do it. However, an even better idea would be to initiate the conversation by offering to do something for them.

Most colleges will have some type of venue for outside entities to give talks to their majors. Ask if they have a seminar/colloquium that you might be able to speak at. I am the current organizer for the CS Colloquium at Trinity and I would love to have speakers from industry offer to come to give interesting talks to our majors (hint, hint). I'm quite certain that I'm not alone. This gives you a great venue to say all the things that I mention in the next section, both to faculty and students, efficiently.

If you have a local Scala Meetup, you might see if the department would be willing to send out an invitation to their majors for that as well.

What Should You Say?

This is where things are a bit more tricky. Most colleges do not view themselves as vocational training. They don't just put things in their curricula because companies would like it, though they do feel some pressure to make sure their graduates have the skills needed to find employment. College curricula focus on fundamental concepts, not tools, because we know that technology is ever-changing and our students are going to have to continue learning throughout their careers. We have to build the foundation for that learning. So the key is to show them how the tools that you are using represent the paradigms of the future of the industry.

With this in mind, your goal isn't to convince them to teach Scala, at least not directly. There is a reason that your company uses Scala, and my guess is that you believe that the reasons your company chose Scala are generally indicative of the way that a lot of the industry needs to move. It might be that you see that data keeps growing in importance and size and that processing that data in a way that scales is critical for the future. You know that using a functional approach is key to writing these types of applications both today and in the future. You know that while the framework you are currently using probably won't be dominant in 10 years, the ideas behind how it works and how you use it to solve real problems will still be folded into the frameworks of the future.

I would say that your first goal is to establish that in the highly parallel and distributed environment we live in, the functional paradigm has moved from an academic play area into an essential real-world skill. You might also tell them why the pragmatic approach of Scala makes sense for bringing the functional paradigm into practical usage. Highlight how it impacts your use case and how you see that being a use case that will only grow with time.

Going back to the fact that colleges want to teach things that are foundational, I would point out how many other languages are following in the footsteps of Scala. This is key because it makes it clear that learning Scala isn't just learning one language. Instead, it is opening the door to where most older languages are evolving as well as where many new languages are going. Knowing Scala helps future-proof students in this regard, and Scala isn't just sitting still and getting older. It is a dynamic language with creators who are working to keep it on the cutting edge of language development while maintaining an eye to the practical aspects of what makes a language usable in industry.

If you get to the point where they are convinced but aren't certain how to put Scala and functional programming into their curricula, point them to academics who have been using Scala for teaching. I would personally welcome all such inquiries. Just tell them to write to mlewis@trinity.edu. In the US, both Konstantin Läufer and George K. Thiruvathukal at Loyola University Chicago have experience teaching with Scala and an interest in spreading the word. So does Brian Howard at DePauw University. While Martin has the most experience of anyone teaching Scala, I'd personally rather he focus his time on Scala 3 development. And while Heather might be a freshly minted academic at CMU, she is definitely a Scala expert, and her experience with the MOOCs means that she has taught Scala to more people than almost anyone in the world.

Going Further

Giving a talk every so often is a nice way to let schools know what types of technologies you use and why you see them being important in the future. As such, it can provide a subtle way to influence the curriculum. However, if you are willing and able to put in a bit more time, you can have a more explicit impact on what is being taught by offering to teach a course as an adjunct professor the way Austin is doing at UT Dallas and the way Twitter does at CU Boulder.

The challenge with this approach is that not everyone is a naturally gifted teacher, and it will be a fairly significant time sink. Technically, whoever does it will get paid, just not that much. (Teaching one class as an adjunct pays well under $10,000 and is often as low as $3,000.) The real question is whether your employer is willing and able to let you do this. I will note that, for various reasons, a developer doing this who doesn't have a Master's degree will need to be reasonably senior to make it work at a lot of universities.

The upside of teaching a course is that you can probably get into an upper division course where you have fairly complete control over the curriculum and you can teach exactly the topics that you would most like new hires to arrive with. Due to growing enrollments, the vast majority of colleges are likely quite open to having an adjunct help out, and many will be open to an interesting upper division offering as they can probably move faculty around to cover other courses. (At Trinity, we are looking for an adjunct for next fall and while they are currently scheduled for an introductory course, if someone wanted to come in and teach my Big Data course using Scala, I could certainly switch to the introductory course to make things work.) Of course, if you want to teach CS1/CS2 with Scala, I've got a bunch of materials you can use to help organize the class.

You might also see if the department has an advisory board. Having a seat on such a board will give your company insight into the inner workings of the department and how they think about the field while also giving you a venue to talk about what you value and where you see the future of the field going.

In The Meantime

Until other colleges catch on to the value of Scala, remember that at Trinity University we are teaching all of our students both Scala and how to think functionally, so let me know when you are looking for interns or junior devs.

Sunday, August 20, 2017

Want to make America Great? Go learn something new.

"Make America Great Again" is a slogan that all Americans are inevitably familiar with these days. Whether they love hearing it, or roll their eyes at it, everyone has heard it many time. While I am not opposed to the argument that America is currently great, the reality is that it doesn't feel very great for a lot of people because of the way our economy has changed. The US stock market has actually been on a very nice climb since around 2009. Nothing really special has been happening in 2017 other than perhaps we have a President who likes to claim credit for everything positive, even if he didn't have anything to do with it. The problem is that the gains from those stocks and the corporate profits that have driven them there aren't trickling down to a large fraction of the population.

Unfortunately, the "plan" from the Trump administration on how to fix things in America was largely to try to roll us back to the 1950s. He promised to bring back jobs in things like coal mining and manufacturing. Of course, those promises are all empty. You can't just roll back the clock and proclaim that the economy is going to work the way that it did decades ago. Things have changed. Technology has changed them.

There are a variety of reasons that lots of people feel left behind in the current US economy, but the one that really stands out to me is the skills gap. Technology has changed the economy, but not enough people have kept up with technology to stay relevant in the current job market. The manufacturing sector is an area often cited for people losing jobs, but the reality is that US manufacturing output is at an all-time high. The thing is, the factories don't need as many workers, and the workers they do need are ones with higher-level skills. This was highlighted in the article "In US, factory jobs are high-tech, but the workers are not".

As that article mentions, what US employees need is training, but US employers aren't prone to pay for it. Neither is the Republican leadership currently running most states and the federal government. There are also some serious challenges with training everyone. A number of these were outlined in "Technology is setting us up for a training crisis".

The thing is, we really have two problems. The one we are feeling right now is that labor is losing out to capital in terms of employer spending, so money isn't trickling down even when stock prices and company profits are really high. The other problem is that the lack of skilled people is setting up a really bad dynamic where the top companies become superstars. This problem was highlighted in the MIT Technology Review article "It Pays to Be Smart", which had the much more informative subtitle "Superstar companies are dominating the economy by exploiting a growing gap in digital competencies".

This article addresses the fact that for a number of years now, economists have worried about a stall in the growth of productivity. This is significant: productivity growth has been the key driver of economic growth and the increase in societal wealth for about as long as humans have had organized civilization. This slowing of productivity has also seemed very counterintuitive, because during this time digital technology has grown by leaps and bounds and has changed our lives in many fundamental ways. What "It Pays to Be Smart" discusses is a different analysis that looks only at the top-performing companies in each segment of the economy. It turns out that the top companies still have great productivity growth. All the other companies are lagging behind, though, and putting a drag on the economy. Why is that? Well, the top companies are doing a really good job of harnessing digital technology, while everyone else struggles to do so in an effective way.

Why is it that the top companies can use technology efficiently while most other companies struggle? To my mind, the answer is once again the skills gap: a shortage of people with sufficient skills to really harness technology in a way that improves productivity. To put it another way, the small companies suck at technology because there are only so many people with the skills, and the big companies have the money to attract or buy out that small population of people. There are certainly other factors, like the growing importance of data and the fact that large pools of data are worth a lot more than small pools (some of these are discussed by the Economist in "The world's most valuable resource is no longer oil, but data"). But it seems to me that the real challenge for small businesses that want to succeed is that they have to somehow attract and retain good people with high-end skills who can bring the efficiencies of modern digital technology to bear. That would be a lot easier to do if there were more such people.

Based on this reasoning, the fundamental problem holding back our economy today isn't anything related to foreign competition or immigrants; it is a lack of people with the right skills to drive the economy forward. So if you believe that America has lost its greatness, and you want to actually do something to make America great again, go out and learn something new. Thanks to the multitude of online learning sites (see links below for some), it has never been easier or cheaper to pick up new skills. It does take effort, but if you really want to improve America, sitting around complaining about the loss of the jobs of old isn't going to do it. Putting in the effort to learn new skills yourself is the activity that I truly believe will have the greatest long-term benefit, and unlike many other things, your learning is something you have control over.

Online Learning Sites

I'm listing some of the big ones here. If you think I missed one, let me know in the comments and I'll add it.

Sunday, June 18, 2017

How will technological unemployment impact birthrate?

This blog post is a short exploration of a thought that hit me about how birth rates might be impacted by the changes in employment caused by future technological change. I consider technological unemployment to be a significant issue as AI and robotics continue to become more capable. I'm not going to support that belief in this post, as that is many posts' worth of material. I will assume it to be the case here, and briefly explore one impact of it that I hadn't thought of until recently.

Birth rates have been decreasing in the developed world so much that in many countries, the death rate is larger than the birth rate. According to the CIA World Factbook, there were 26 nations in 2014 for which this was the case. Japan (-1.8 net) and Germany (-3.1 net) are the ones most commonly mentioned, but there are many others. A nice treatment of this can be found at https://ourworldindata.org/fertility/. There are a variety of reasons for this. Historically, the first was probably lower infant mortality rates. The more significant ones to me, though, are the ones that center on the increase in women's rights. As women gain more control over their reproductive rights, they generally choose to have fewer children. In developed nations, we are also seeing a decline in marriage rates and more women entering the workforce. Both of these tend to further reduce the birth rate. It is the last factor that I want to focus on, as it is the one most related to technological unemployment.

It is worth taking a few sentences to explain why these thoughts popped into my head. I often debate the future impacts of technology with a friend who we will just call by his last name, McLane. He doesn't believe that technological unemployment will be an issue, and one of the many reasons he has cited is articles about there not being enough people to work in countries that have low population growth. He is correct that a number of countries are providing incentives for people to have kids because they are worried about not having enough people to care for the elderly and keep the economy moving forward. Given the variety of reasons for lower fertility rates, I can indeed imagine a day when the global population goes into a tailspin simply because so few kids are born.

So let's assume that McLane is wrong and we get technological unemployment and have to do something, possibly a basic income, so that a large fraction of the population can live a life with dignity in a world where they can't earn a living wage because everything they have the skills to do can be done better and cheaper by machines. What happens to the birth rate at that point?

I'm going to be optimistic and assume that in such a world, women have at least equal power in society relative to men. I also assume that women maintain general control over their reproductive rights and generally have all the freedoms of men. (I can actually imagine women having more power at that time, as there are indications that technological unemployment so far has hit men harder than women; if more women have jobs than men for any period of time, they could be the gender with greater social power.) Still, you get a different dynamic when people aren't chasing careers. Right now, both men and women often put much of their personal lives on hold until they are well established in a career. If we have a social structure where machines are doing so much of the work that people live comfortable lives without having careers, that changes.

One of the standard arguments that I hear "against" technological unemployment is that people get lots of meaning from their jobs. I use quotes because that isn't really an argument for why it won't happen, just something that could cause problems if it does. Personally, I think that people can find meaning in lots of other things, like hobbies or family and friends, if they don't need to work to live a comfortable life. Let's be honest: lots of people today have rather meaningless jobs, and they already get much more of the meaning for their lives from other activities.

Given that, I can see a scenario where the birth rate goes up, and people decide to spend a lot more of their time raising children and focusing on family life. Of course, I can also imagine a world where everyone is just watching things on their screens, has little physical contact with other humans, and the birth rate continues to sink. I'm wondering what other people think. If technology takes away the need to chase a career, will birth rates in the developed world start to rise again, or will we be so deep into enjoying our technology that the idea of having kids and a family will continue to decline?

Wednesday, April 19, 2017

Does Big Data Make Centralized Control the Better Option?

I grew up in the waning years of the Cold War. I remember doing a project in high school on the possibility and repercussions of nuclear war. I got to see the fall of the Berlin Wall and many other events that went along with the crumbling of the world's other superpower as their system of government, communism, proved to be inferior to the combination of democracy and free-market capitalism.

One of the standard arguments I recall for why communism failed was that it was inefficient and wasteful, particularly when it came to the allocation of resources. A standard example was that local farmers knew better when to plant and harvest than the central government, but in the USSR they had to do those things when that central government told them to. By contrast, in the US, farmers would make decisions based on their experience to try to maximize their yields. They did that because that was how they made money. A year of bad yields would be an economic hardship in the best case and could possibly lead to losing the farm.

A thought that struck me recently is that this particular argument against communism might not apply anymore. The reason is that we have moved into the era of "big data". In fact, I wonder if centralized control might not have the advantage these days, assuming that centralized control does things in an optimal way.

To illustrate this, consider the farming example I just gave. The local farmers have a lot of experience they can fall back on, but these days I'm guessing that even better decisions for allocating resources could be made using a data set that included crop yields across an entire nation for many years, combined with detailed weather data during that same time and other potentially relevant information.
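
As a toy illustration of that idea, here is a sketch in Scala with made-up numbers (everything here is hypothetical; a single rainfall variable stands in for the rich feature sets a real system would use) of the kind of model such data enables: an ordinary least-squares fit that predicts yield from weather.

    // A toy sketch: ordinary least squares on hypothetical
    // (rainfall, yield) pairs. Real predictive systems would use many
    // features and national-scale data, but the principle is the same.
    object YieldModel {
      def main(args: Array[String]): Unit = {
        // Hypothetical (rainfall in cm, yield in bushels/acre) pairs.
        val data = Vector((40.0, 110.0), (55.0, 140.0), (60.0, 150.0),
                          (70.0, 165.0), (80.0, 172.0))

        val (xs, ys) = data.unzip
        val xMean = xs.sum / xs.length
        val yMean = ys.sum / ys.length

        // Least squares: slope = cov(x, y) / var(x); intercept from the means.
        val slope = data.map { case (x, y) => (x - xMean) * (y - yMean) }.sum /
                    xs.map(x => (x - xMean) * (x - xMean)).sum
        val intercept = yMean - slope * xMean

        // Predicted yield for a season with 65 cm of rain.
        println(f"predicted yield at 65 cm: ${intercept + slope * 65.0}%.1f")
      }
    }

With national-scale records, the same idea extends to many more variables and to models far better informed than any single farmer's experience.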

Consider this article about a company that uses predictive AI to place orders in advance to improve shipping efficiency and reduce the number of returns. It is just one of many examples of how computer systems can now make much better decisions than humans can because they are able to pull in much larger quantities of data.

Of course, the key here isn't centralized control, and I'm sure that many people would argue that centralized control still fails because it lacks the motivation to do well that individuals have. In that regard, this isn't a call to switch to communism. Instead, the key here is the data, and this is an area where government can play a role. Even better than one big centralized decision making process is a system where everyone has access to all the relevant data, and they can all try out various ways to process it to make optimal decisions.

In that regard, I think that government could play a role by making data available and helping different groups make their data accessible and consistently formatted so that it can be more broadly used. This doesn't just apply to crops, with data on weather, planting dates, harvesting dates, and yields by location; it could also be useful for a lot of data related to health, including air and water quality and potentially consumption habits. I don't have a full mental image of what exactly this looks like in a broad sense, and I can clearly see challenges related to privacy. Still, I feel that we need to push for making more data generally available so that individuals and companies can utilize it to make better decisions. Regardless of where the control comes from, the way forward for efficient decision making is clearly the availability of useful data.