Swiftly and Strongly
The new Swift programming language announced by Apple at WWDC has generated some pretty heated discussions on public forums such as Twitter. I think this is a good thing: better to have people passionate, and hopefully providing the feedback that Apple elicited at the conference, than to get a colossal ‘meh’ from the developer community.
And there are plenty of aspects of Swift to get passionate about, on both sides of the argument. There are those who have a deep love for Objective-C; for its dynamism, self-documentation, and relative simplicity. And there are those who clearly yearn for something new, who are appalled by the rough edges of the existing language, and for whom Swift is a major step forward in safety and expressiveness.
Me? I’m a bit on the fence. There is no question the basics of Swift are cleaner. Small scripts are nicer to read, and will appeal to developers familiar with JavaScript, Ruby, and other scripting languages. I also like the type inference. Why repeat your types if the compiler already knows what they are? This avoids a lot of the time-wasting boilerplate that traditionally burdens strongly-typed languages like Java and C++.
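To make that concrete, here is a minimal sketch of type inference at work; the variable names are my own, and nothing here requires an explicit type annotation:

```swift
// The compiler infers each variable's type from its initializer,
// so the declarations stay as terse as a scripting language's.
let greeting = "Hello"          // inferred as String
let count = 42                  // inferred as Int
let ratio = 3.5                 // inferred as Double
let names = ["Anna", "Brian"]   // inferred as [String]
```

The result is still fully statically typed: `count` is an `Int` forever after, even though the word `Int` never appears.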
But there has been a clear shift in Swift to strong typing, including complexities like generics, and I wonder whether this is the best choice for a language primarily designed for application development. (Perhaps Apple sees this as a catch-all language, which may be a problem in its own right.)
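For readers who haven't met generics before, this is the sort of thing I mean; a small illustrative function of my own devising, where the type parameter `T` is resolved and checked at compile time:

```swift
// A generic swap: works for any type T, but both arguments
// must be the same T, and the compiler enforces that.
func swapValues<T>(_ a: inout T, _ b: inout T) {
    let tmp = a
    a = b
    b = tmp
}

var x = 1
var y = 2
swapValues(&x, &y)   // fine: both Int
// swapValues(&x, &greeting) would not compile: Int and String differ
```

Powerful, certainly, but it is exactly the kind of machinery C++ and Java programmers know, and app developers may never have needed.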
The classic argument for strong typing is that it isolates more bugs at compile time. This has a cost, because you generally need to write more code to explicitly declare types. A scripting language like Ruby or Python can be up to 5 times more compact than similar code written in the mother of all strongly-typed languages, C++.
And I’m not sure I even buy the whole ‘catching bugs early’ argument. Strictly speaking, it is true, but is the payoff worth the cost? I can’t remember the last time I accidentally put an NSNumber in an array intended to contain NSString instances, but if it happened, I’m sure I caught the problem at run time the first time I tried to exercise the code. In other words, compile-time checking may have saved me a 5 second compile-and-run cycle. If I encounter a bug like that every month — which is probably an overestimation — I am writing a lot of typing information for a very small gain.
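The trade-off looks something like this sketch (my own example, using plain `String` and `Int` rather than the Foundation classes): a typed array rejects the wrong element at compile time, while an untyped one accepts it and leaves the check to run time.

```swift
// Typed array: the element type is part of the array's type.
var typedNames: [String] = ["Alice", "Bob"]
// typedNames.append(42)   // compile-time error: Int is not String

// Untyped container: mixing types is legal, so a stray Int
// only surfaces when the code that reads it finally runs.
var anything: [Any] = ["Alice", 42]
let first = anything[0] as? String   // must check at run time
```

Whether that compile-time rejection is worth the annotation effort is exactly the question at issue.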
I’m actually a bit surprised that this whole typing question has come up again, because I had assumed that these arguments had played out in the 90s, and dynamic typing had proven itself through powerful frameworks and languages written in Ruby, Python, and — yes — even JavaScript. Half the web runs on the stuff now.
The argument of dynamic language aficionados has always been that strong typing just gives a false sense of security. Without decent tests, no language is safe, and you are less likely to test well in a strongly-typed language, where passing the compiler is often perceived to be adequate.
There is one area where strong typing does have a definite benefit, and it’s where I built up quite a bit of experience with the paradigm: High-Performance Computing. The compiler can often generate more efficient code when it knows data types explicitly, particularly for primitive types like integers and floating-point values. If Apple’s intention is to eventually use Swift as a systems language, perhaps replacing C in an envisaged future OS, then strong typing and generics make sense. For day-to-day app development, not so much.
On the whole, I’m excited about Swift, as much because it shows Apple is on its game, pushing the envelope, as for the language itself. I worry a little that it has been influenced too much by languages like Java, C++, and C#, which in my view are actually sub-optimal in many areas when it comes to app development. But who really knows? Time will tell, and it will certainly be an interesting ride.