Recently on Shades Of Brown, the question of “what is a computer?” came up in regard to the new iPad Pro. I went off on how input method is not the deciding factor in how much of a computer a device is. Rather, what decides whether a device is a computer is the ability to create and manipulate data1.
My thoughts came from the dozens of discussions I’ve seen online claiming that a mouse-driven GUI or a terminal are the definitive input methods for a computer, and that anything else is lesser. As if that basis, and only that basis, is how we decide whether a device is a computer.
I prefer a definition of computing that avoids any discussion of input method, because input methods are flexible and can change. Theoretically, a touchscreen can perform the same actions as a mouse and keyboard2; the input method does not change the data being created or modified. Of course, existing operating systems scale back their efficiency, or even their ability to do certain tasks, when the input method changes, but those are artificial limits placed by the maker, not limits inherent to the input method itself.
This comes up in the discussion of the new iPad (and to an extent the discussion of Chromebooks), as the narrative is that “this isn’t a real computer, I can’t get real work done on it.” The generalization of what counts as work, and what counts as real, is the core issue. For most office work, especially work that leans toward data entry, one could get away with using a phone. Not an ideal use case, but by that narrative, phones get a checkmark in the “is it a computer?” box. For those who do creative3 work, neither phones nor tablets nor Chromebooks would get the task done. No checkmark in the “is it a computer?” box.
What differentiates creative work from general office work is not the form factor, though; it is tech companies’ desire to limit the abilities of a device based on its form factor. An iPad cannot edit 4K video as efficiently as macOS because Apple has not built Final Cut Pro for iOS and does not allow developers to make professional software as powerful as its desktop counterparts.
All of this is fine; mobile computing is going to become more open and powerful as time goes on. What troubles me is the attitude among tech folk that, even though a terminal and a touchscreen device are currently at around the same level in terms of “tasks that can be done solely with this input method,” we still act as if the terminal is more powerful. The terminal is more powerful for programming and micromanaging a computer; touch input is better for creation and consumption.
I want to fight strongly against any narrative that assumes programming and CLI file management are somehow more important than creating and consuming non-text content. It carries the gross assumption that stereotypical “computer nerd” tasks are the pinnacle of computing, and that only “real” computer users know their way around a terminal. It creates a culture and environment in which non-technical users are looked down upon, which produces software that is shittier and harder to use.
It is almost a paradox at this point: touch-input devices are not getting better at creation because we assume they are not real computers. But in order for touch software to get better, we have to start assuming they are.