“Provider” is a JavaScript programming pattern that I’ve found myself using more and more of late.

A Provider is an asynchronous function that can be called to provide a particular resource, if there is a need for it.  The Atom functions .need() and .provide() exist for just this purpose.

var
  a = atom.create(),
  need = a.need,
  provide = a.provide,
  json2src = 'https://raw.github.com/douglascrockford/JSON-js/master/json2.js'
;

// If JSON is built-in or already loaded, use that.
// Else, load Crockford's.
provide('JSON', function (done) {
  if (typeof JSON !== 'undefined') {
    done(JSON);
  } else {
    loadScript(json2src, function () {
      done(JSON);
    });
  }
});

need('JSON', function (JSON) {
  // Safely call JSON functions in all browsers.
  console.log(JSON.stringify(document.location));
});

Providers in Atom are nice because:

  • They only get invoked if necessary — that is, if there is a need and if the property is not already set — so you benefit from lazy loading;
  • They get invoked at most once, even if multiple consumers call .need() (see the sketch below); and,
  • Resources are just Atom properties, so you can use them with other Atom functions, too.
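
Here’s a minimal sketch of that lazy, invoke-at-most-once behaviour, reusing the need and provide shortcuts from above.  The ‘config’ property and the fetchConfig() helper are invented for illustration:

var providerCalls = 0;

provide('config', function (done) {
  providerCalls++;              // never exceeds 1
  fetchConfig(function (config) {   // hypothetical async loader
    done(config);
  });
});

// Nothing has needed 'config' yet, so the provider has not run
// and providerCalls is still 0.

need('config', function (config) {
  // First consumer.
});

need('config', function (config) {
  // Second consumer; the provider is NOT invoked again.
});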

More about Atom can be found in the project README.


Any JavaScript library that claims to help with “asynchronous control flow” must at least enable easy patterns for running asynchronous tasks in parallel, or in series.

With Atom, task parallelization is easily accomplished using .set() and .once().  (Note that this is an application of the Barrier pattern.)

var
  a = atom.create(),
  once = a.once,
  set = a.set
;

ajaxCall('/me/friends', function (friends) {
  set('friends', friends);
});

ajaxCall('/me/games', function (games) {
  set('games', games);
});

once(['games', 'friends'], function (games, friends) {
  alert(games.length + ' games and ' + friends.length +
    ' friends were fetched in parallel.');
});

And task serialization is a cinch with .chain(). Each Atom instance has an asynchronous queue of functions. With each call to .chain() you can append one or more functions to the end of the queue.

a.chain(

  // First, fetch config info from the server
  function (next) {
    ajaxCall('/config', function (config) {
      // Signal that this task is done, and pass along the config
      next(config);
    });
  },

  // Second, build the UI
  function (next, config) {
    buildUI(config, function (ui) {
      next(config, ui);
    });
  },

  // Third, wait for user to select an option
  function (next, config, ui) {
    ui.on('select', function (choice) {
      switch (choice) {
      case 1:
        // ...
      }
    });
  },

  // ...
);
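
Because each call to .chain() appends to the same queue, later code (even in another module) can tack extra steps onto the end.  A minimal sketch:

a.chain(function (next) {
  // This won't run until all previously queued steps have called next().
  console.log('all earlier steps have completed');
  next();
});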

For more about Atom, see the project README.

In May, I released a small JavaScript library called Atom.  Though only a couple of Kb minified, it is nonetheless very useful for a number of things, and my team at Zynga has been using it for almost all new projects we’ve started in the past few months.

In this post, I want to demonstrate a common and useful pattern that we use Atom for, which we call the Barrier pattern.

The essence of the Barrier pattern is that we have some code that we want to be run only after a certain set of conditions is met. To accomplish this, we need only two Atom methods: .once() and .set().

var
  a = atom.create(),
  once = a.once,
  set = a.set
;

once(['cleanedRoom', 'brushedTeeth'], function () {
  sayGoodnight();
  goToBed();
});

In the example above, the two conditions are represented by the ‘cleanedRoom’ and ‘brushedTeeth’ properties of an atom instance. We don’t care which order they are completed in — only that they are both completed as a prerequisite to saying goodnight and going to bed.
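
For completeness, the conditions might be satisfied elsewhere like this (the values themselves don’t matter for a simple barrier); whichever .set() happens last releases the barrier:

// Somewhere else, in whichever order the chores actually finish:
set('cleanedRoom', true);

// ...and, some time later...
set('brushedTeeth', true);   // this one releases the barrier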

The Atom method .once() lets us register a callback that gets called as soon as some combination of properties gets set, so it suits the purpose of the simple barrier above nicely.  However, we also get access to the values of those properties, which can be useful:

if (typeof jQuery !== 'undefined') {
  set('$', jQuery);
} else {
  loadScript('//code.jquery.com/jquery-1.8.3.min.js', function () {
    set('$', jQuery.noConflict());
  });
}

once('$', function ($) {
  $(function () {
    set('body', $(document.body));
  });
});

once(['body', '$'], function (body, $) {
  body.append('We have jQuery and can start manipulating the DOM!');
  // ...
});

This example is a robust usage of jQuery that does not depend on jQuery having been loaded beforehand. If jQuery is not detected at the start, we make an asynchronous call to load it. Either way, as soon as we’re sure it exists, we set the ‘$’ property.

With Atom, when you use .set() to set a value for a property, it immediately triggers any listeners for that property. So we have not only set up a barrier that makes code wait for jQuery before executing, we have also provided a safe reference to it.
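
The converse also holds, and the jQuery example above relies on it when jQuery is already present: a listener registered after a property has been set fires right away.  A quick sketch, with an invented ‘answer’ property:

set('answer', 42);

// Registered after the set, but fires immediately because
// 'answer' already has a value.
once('answer', function (answer) {
  console.log(answer); // 42
});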

I hope it’s easy to see how the barrier pattern, as enabled by Atom, can be used to make sure that your code runs as soon as the necessary prerequisites are set, and not before. For more information about Atom, read the README.

I’m happy to announce that with the support of my employer, Zynga, I’ve just released an open-source component called Atom.

Atom is a small JavaScript class that provides asynchronous control flow, property listeners, barrier pattern, and more.  It is easy to include in any JS project, liberally licensed (BSD), cleanly coded and documented, and includes unit tests.  We’ve been using it internally for a handful of projects, and it is a good fundamental building block to help simplify complex application logic, especially in a highly asynchronous environment.

Check it out!



I have voted in every federal and provincial election for which I’ve been eligible. Each time I have voted for the same respective parties.  And my vote has NEVER resulted in a seat being won, or had any influence over government policy whatsoever.  Looking this morning at the preliminary results of yesterday’s BC Election, I see this trend has continued: Liberals 49, NDP 36, Green 0.

Not that I’m surprised.  It’s an artifact of our First-Past-The-Post electoral system that the majority of votes translates into ALL the seats, and fringe parties never have any voice.  While the Greens have enjoyed a popular vote of 8-15% in most recent elections — even though those voters must know that their votes are essentially “wasted” — they never win seats.

The 8-15% statistic is of course misleading.  How many would have voted Green, but didn’t because they considered it a wasted vote?  If everyone voted sincerely rather than strategically, I suspect Green support would be at least double those numbers.  But as long as there are strategic incentives to vote against your ideals, election results will never truly represent the will of the electorate.

So even though I expected my Green vote to be inconsequential, as usual, when I went to the polls yesterday, I was nonetheless optimistic about the BC-STV referendum and what it might mean for future elections.

BC-STV: What could have been

BC-STV is (was) a proposed reform of our provincial voting system.  Similar systems are in use quite successfully throughout most of Europe.  [Edit: As pointed out by a commenter, this is misleading.  Proportional Representation is used widely throughout Europe, but not STV specifically.  Anyway…]  Under BC-STV, voters would rank candidates in order of preference, and elected representatives would end up being proportionally very similar to the votes cast.  It’s a system that produces provably fairer results.  It would have meant that more people’s votes would have an effect on seats won and on government policy.  In short, it would be more democratic.

But STV got trounced in the referendum.  It needed 60% to pass, and it looks like it’s only going to get about 38%.

The reason?  Under BC-STV, it’s too complicated to explain exactly HOW your vote will count.  I said above that the system is “provably fairer”… but the problem is that the proof is not at all simple to follow.  Most voters were confused by it, and voted not to switch.  (OK, I’m glossing over some details here.  There were other complaints about BC-STV as well, but the complexity issue was really the killer.)

The outcome is hugely frustrating for those of us who took the time to study up on BC-STV.  Most of those NO votes were almost certainly cast by people who just didn’t bother to learn how it works.  But at the same time, we probably shouldn’t be all that surprised.  After all, how hard can we really expect people to study for an election?

Approval Voting: What could be

As it happens, however, there’s a system that would both provide fairer results AND be just as easy to understand as the current FPTP.  It’s called Approval Voting.  Under AV, each voter can vote for as few or as many of the candidates as they wish.  The winner is the candidate with the most votes.


That’s it.  It doesn’t really get simpler than that.  But this simple system has some great benefits that would improve election results AND campaign quality:

  • Easy to understand.  It’s only a minor change from FPTP as far as voters are concerned.
  • Easy to use.  The number of spoiled ballots would probably be even lower than with FPTP.
  • Easy to tally.  Cost of running elections would not increase (compared to BC-STV, for example).  Most vote tallying systems, whether manual or automated, could be adapted to it with relatively little effort.
  • There’s no incentive to vote insincerely (“strategically”).  Voters who are motivated most by voting AGAINST the candidate they LEAST want to win can do so without sacrificing a vote for the candidate they MOST want to win.
  • Cleaner campaigns.  There’s greatly reduced incentive for candidates to engage in negative campaigning or attack ads.

Sounds good, right?  Next time BC has an opportunity for electoral reform, Approval Voting is what I’d like to see on the referendum.  That might not happen any time soon, so in the meantime I’ll continue to vote in every election, and most likely have my vote not count.


I wrote this essay in December 1998 for PHIL342a: Minds and Machines, an excellent course taught by Prof. Charles Morgan that I took as part of my undergraduate degree. I’m now blogging it for posterity.

Introduction

First off, it is necessary to attempt to characterize what we mean by mental capacities. There will likely never be a universally agreed-upon definition of this term, or of intelligence, mind, sentience, consciousness, etc. There have been many attempts, and they vary greatly. Yet all of these terms are generally understood to be related. One way to ask the question, “What are mental capacities?”, is to ask, “What would an entity have to be like in order to justify the attribution of mental capacities?”, where “entity” could be human, machine, extra-terrestrial life form, or what have you. This question, too, has been answered in many different ways. For some people, intelligence is something that humans and humans alone can have, by definition. Others are willing to attribute mental capacities to the lowliest of creatures or even to plants.

Alan Turing, in his landmark 1950 paper entitled “Computing Machinery and Intelligence”, proposed to answer a very similar question to the last one above. He argued in essence that if a machine were able to fool a human into thinking that it was also human, then we would be justified in saying that it could think.1 Out of this sprung the popular “Turing Test” for intelligence, which involves a machine candidate, a human confederate, and a human interrogator. The three participants are isolated from one another. The machine and the human confederate are not allowed to interact; however, the interrogator may communicate freely with both the human and the machine via a computer terminal by which textual messages may be sent back and forth. It is the interrogator’s task to determine in a limited amount of time2 which one is the machine, and which one is the human. If the machine is able to convince the interrogator of its “humanness” as often as the other human is, then the machine is considered to have passed the Test.

Turing observed that his Test may place the machine at an unfair disadvantage. Is it really necessary to demand such faithfulness to human-like intelligence, in particular? Isn’t there a more general test which a machine might be capable of passing without in fact having to pass itself off as human? Perhaps being able to pass the Turing Test is not necessary for the attribution of mental capacities, but surely it is sufficient, or so he argued. (op. cit.: 55)

Probably the most famous instantiation of the Turing Test is the Loebner Prize, an annual competition first run in 1991, in which computers vie for the title “most human-like”. The following is an excerpt from the transcript of the 1998 winner of the Loebner Prize, Robitron Software Research Inc.’s “Albert One”:3

Interrogator: what do you work as?
Program: I’ve worked on a lot of PC’s and some mini computers and mainframes when I was in college.
Interrogator: what did you do at uni?
Program: You should worry more about what you mean.
Interrogator: what do you mean?
Program: You should worry more about what you mean.
Interrogator: and what does that mean?
Program: Tiberius
Interrogator: okay, well if you’re going to be completely incoherent how am I supposed to have a conversation with you?
Program: Do you like Mexican food?
Interrogator: never had any. do you like oysters?
Program: I can certainly hope that any. You aren’t going to disappoint me I hope?
Interrogator: what do you mean by that?
Program: I don’t know what I mean. You tell me what you think and I’ll tell you if I agree or not.

Needless to say, this is not an altogether impressive performance, and indicates that the current state of the art in conversational software is still not very advanced. It certainly falls far short of the sort of dialog that Turing anticipated: (op. cit.: 60)

Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?
Witness: It wouldn’t scan.
Interrogator: How about “a winter’s day”? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter’s day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

This exchange is far more convincing. Yet even this level of competency wouldn’t satisfy everyone. Gunderson (1964), among others, has made the claim that the Turing Test measures too narrow a range of human behaviour. Such complaints might be paraphrased: “Hold an intelligent conversation? Is that all it can do?! Then it can’t possibly be intelligent!” In particular, there is the sentiment that having a more human-like means of interaction with the physical world should be required; for instance, being able to recognize objects visually. Even Hugh Loebner, sponsor of the aforementioned prize, has high expectations for the as yet unawarded $100,000 grand prize. In contrast to the yearly $2000 “most human” winners, the grand prize winner will have to deal with audio/visual input. 4 (Loebner 1994)

My thinking machine needn’t be able to throw a football or recognize an orange, because while those activities do require humans to perform some mental computations, they do not exemplify what I consider to be our most important abilities. I agree with Hofstadter, who lists the following mental faculties as minimum requirements for intelligence (1979: 26 — I enumerated the list for ease of reference):

  1. To respond to situations very flexibly;
  2. To take advantage of fortuitous circumstances;
  3. To make sense out of ambiguous or contradictory messages;
  4. To recognize the relative importance of different elements of a situation;
  5. To find similarities between situations despite differences which may separate them;
  6. To draw distinctions between situations despite similarities which may link them;
  7. To synthesize new concepts by taking old concepts and putting them together in new ways; and,
  8. To come up with ideas which are novel.

There are some noteworthy omissions from this list, not the least of which is the capacity for sophisticated language use! This is not to say that sophisticated language use is not important, merely that I do not consider it necessary for the attribution of mental capacities since I can imagine beings without it that I would still consider intelligent. Nor is there an item requiring audio/visual capacities or any other specific means of interaction with the world. Some means of interaction with the world must be assumed, of course, but there is no stipulation as to what form this must take. It may be as complicated as a diverse set of sensory-perceptual systems similar to that of humans, or it may be as elementary as a dumb terminal. Language, at least, does provide us with a means of determining whether Hofstadter’s requirements are met by a particular candidate, so it could be seen as a necessary component in the absence of any other complicated means of interaction.

The programs claimed to have passed the Turing Test for intelligence have not yet come close to satisfying Hofstadter’s criteria. They are remarkable in their ability to contend with a variety of situations that arise in superficial conversation with forgiving interrogators, but their shortcomings are obvious as evidenced by the dialog above. Even the simple (for a human) ability to make analogies is lost on most conversational programs. Unfortunately, the interrogators often don’t know what to look for, and are satisfied with whimsical, incoherent conversations that change topic every sentence or spit back the interrogator’s own remarks with pronouns reversed:

Interrogator: I think you are a brainless automaton.
Program: Why do you think I am a brainless automaton?
Interrogator: Because you parrot everything I say.
Program: Does it bother you that I parrot everything you say?

Compare this to Turing’s poetry dialog from earlier.

To suggest that mental capacities either exist or they don’t in a given candidate is certainly misguided. There are obviously varying degrees of intelligence, and no sharp borderline between intelligent and non-intelligent. In addition, the “intelligence” of some machines is exhibited in only a restricted domain, such as chess or the works of Shakespeare. It is possible that some of these expert systems could meet Hofstadter’s criteria within their limited domain. If the domain is too limited, we would not want to attribute mental capacities. Instead, we require a more general intelligence. So, we may be more confident in our attribution of mental capacities the better the candidate is able to meet Hofstadter’s criteria in a very large domain. Exactly how large is open to debate.

The Argument For

Proceeding with the idea that mental capacities exist if Hofstadter’s criteria are met within a sufficiently large domain, we will look at what I will call the Argument by Mental Faculties:

a. Meeting Hofstadter’s criteria is sufficient for mental capacities.
b. It is possible to design a machine that meets each of Hofstadter’s criteria.
c. Therefore, it is possible to design a machine with mental capacities.

I consider this argument to be the “best” one in favour of the conclusion, not because it is the most obvious or the easiest to support, but because I think that it offers the most insight into the meaning of mental capacities. It is not based on a characterization of mental capacities as simply “what humans do”, but rather on a distilled set of behaviours. In support of premise b, we’ll now look at the eight mental faculties in question.

1. The impression of computers as unbending, rule-following automatons is completely justified: that’s exactly what they are, looked at from the bottom up. But it is misleading to suppose that this low-level adherence to rules necessitates higher-level inflexibility. Flexibility can be programmed, though it may sound like a contradiction in terms. Indeed, teaching computers to become more flexible has been a main thrust of AI research since its inception, and not without some measure of success. Some excellent examples can be found in modern computer games implementing complex interactive 3D environments, in which computer-controlled opponents exhibit varied and seemingly intelligent behaviour in response to the player’s actions.

2 – 4. Taking advantage of fortuitous circumstances, dealing with ambiguous and contradictory data, and identifying salient features of a situation; all these abilities involve having some sort of a conceptual model of the environment or situation, being able to recognize changes in it, and being able to assign significance to different parts of it. For humans, this is called “common sense”. When it comes to computers, though, common sense is not so common. The most straight-forward approach to endowing a computer with common sense is being taken by Douglas Lenat, whose CYC project aims to explicitly “prime the pump with the millions of everyday terms, concepts, facts, and rules of thumb that comprise human consensus reality”. CYC has been in development since 1984. By 2001, it is hoped, it will have accumulated a critical mass of common sense, and will be ready to start learning on its own, “by automated-discovery methods”, thus creating a snowball effect of knowledge acquisition. (Lenat 1997: 201-3) Lenat’s method is regarded as something of a brute-force approach, but it will nonetheless result in a program with a pretty good knowledge about how the world works, and thus the ability to model a broad range of real-world situations effectively.

5 – 6. The ability to form analogies, which is what these two criteria signify, is considered by psychologists to be a central feature of human intelligence. This is evidenced by the fact that a considerable portion of the questions on any standard IQ test isolates this skill in the form of either verbal or visual analogy completions. (Yet the Turing Test makes no such demands on the candidate! — at least not explicitly.) Hofstadter’s more recent work (1995) has dealt largely with computer models of analogical thought. Of particular interest are his programs Seek-Whence, which solves analogy puzzles using sequences of integers,5 and the more elaborate CopyCat, which watches the user make some sort of change to a string of characters, and then performs an analogous change to a different string.6 The domain of these analogies is much more limited than what a human is comfortable with, but it at least serves as a proof of concept.

7 – 8. These last two criteria have to do with creativity. Everyone knows that machines are utterly uncreative, right? A passage from Turing may be enlightening at this point. Turing anticipated a variety of objections to the idea of machine intelligence, one of which he called Arguments from Various Disabilities. Such arguments make the claim, “you will never be able to make a machine do X,” where X can be any of a number of behaviours including “do something really new”. However, he noted, no support is ever offered for such claims. With subtle humor, he continues:

I believe they are mostly founded on the principle of scientific induction. A man has seen a thousand machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for minutely different purposes they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general. (op. cit.: 61)

Computers have written short stories, novels, and classical music, as well as created original drawings. These programs often come up with “ideas” completely unforeseen by their creators. The quality of their creative works may be disputed, but again we have a proof of concept.7

It will likely be quite some time before we see a machine capable of meeting Hofstadter’s criteria in any domain as broad as that in which humans operate — one to which we would attribute mental capacities. Nonetheless, each of the criteria has already been met in limited domains.

By boldly and explicitly stating a set of qualifying abilities for mental capacities, the Argument by Mental Faculties not only narrows our focus down to the important issues, but also suggests research directions.

The Argument Against

One thing has been monstrously difficult to reconcile in any formal way with the possibility of designing mental capacities into machines: the subjective experience or feeling of mind, that “inner life”, that sense of being or sentience—consciousness itself. How can a sequence of neural firings or any other physical process create sensations like the taste of a strawberry? While we might be able to create a robot capable of running a thorough chemical analysis on a fruit sample, determining the proportions of the constituent compounds and thus identifying the fruit as a strawberry rather than, say, a kiwi, it is not at all clear that we could also program in the experience of taste, or of color, or of elation, or of satisfaction. Suppose our robot spoke in the first person only because that’s what it had been programmed to do, and that in fact it had no sense of self, no inner fire, no idea of what it is like to “be”. Would it be fair to attribute mental capacities to our robot, if we knew this? Most people would be inclined to say “no”.

This so-called mind-body problem (as well as the related other minds problem8) has stumped an astounding number of thinkers. In his 1997 book How the Mind Works, arguably one of the most comprehensive and authoritative modern assessments of the matter indicated by that title, Steven Pinker can only mark the issue of subjective experience as a problem which “continues to baffle the modern mind”. (Pinker 1997: 558) Noam Chomsky suggests that some problems may simply fall outside human cognitive capabilities, due to biological restrictions. For these problems he uses the hyponym “mystery”, as they are bound to forever remain unsolved by humans. He speculates that mental experience may be one such mystery. (Chomsky 1993: 44-6)

The poor general understanding of this issue has naturally led to some doubts as to whether subjective experience is in fact replicable in a machine. We often see the following Argument by Consciousness, a simple extension of the other minds problem to machines:

d. Subjective experience is a necessary condition for mental capacities.
e. It is impossible to verify that a machine has subjective experience.
f. Therefore, it is impossible to verify that a machine has mental capacities.

And then, if we can’t detect it, how can we create it?

f. It is impossible to verify that a machine has mental capacities. (from above)
g. But in order to design a machine with property P, we must have some way of telling when P is present.
h. Therefore, it is impossible to design a machine with mental capacities.

Turing quotes a certain Professor Jefferson as saying, “No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse…” (op. cit.: 60) I have found a similar sentiment to be rampant in the general population. The ability to imagine an entity bodily and behaviourally similar to humans, but which lacks a sense of self-awareness, i.e. lacks subjective experience of the type that you and I enjoy, sadly, is something that I lack. A robot with sensory apparatus can feel, but it is supposed that it might be only feeling without actually “feeling like” something. I have a very hard time imagining such a thing, but there is no use in arguing over whose imagination is closer to the truth.

Instead, what I want to attack is premise e, that verification of subjective experience is impossible. How do we know this? The only way we can observe the subjective experience of some other entity E first-hand is to actually be E. Since I am not E, I will never be able to know first-hand whether E has a subjective experience. Granted. The problem is in the idea that the only way to be certain that subjective mental experience exists is to observe it first-hand.

I believe that this idea is mostly founded on the principle of scientific induction. A man observes a limited number of minds in his lifetime (i.e. his own). From what he sees of them, he draws a number of general conclusions. They are observable, by his own first-hand experience. Naturally, he assumes that these are necessary properties of minds in general.9

But let us build a robot, give it an assortment of high-fidelity sensory apparatus and some complicated software for interpreting the data and integrating it with a knowledge base, give it a memory for past experiences, and let it meet Hofstadter’s criteria for intelligence over a very broad domain. I submit that such a contraption necessarily has a subjective experience, and that to imagine anything different is to entertain pure fantasy. After all, what is subjective experience but the feeling of being there? And is it not a contradiction in terms to speak of something “sensing without feeling”?

Now obviously, there is a range of possible levels of “quality” or “intensity” in subjective experience, directly related to the complexity and fidelity of the sensory apparatus, and to how effectively the subject is able to integrate, interpret, and thus understand the sensory data. This process may rightly be called its “perception”. If, as a thought experiment, we were to remove your ability to perceive the world, including cutting off sensory input from within your own body so that you truly couldn’t feel anything (assuming we could do this without killing you), would you have any grounds for maintaining that you were still conscious? “Yes,” some would argue, “I would still have an inner life, as I would still be able to observe my own thoughts.” Perhaps, but would there be any separation between you and the rest of the world? Would you be self-conscious? “Yes,” some would still argue, “in my memory.” Ah, but what of when those memories lose their immediacy? I think now you would have a very hard time maintaining that you were self-conscious. Sensation-perception and sensory-perceptual memory are necessary for continuing self-consciousness, i.e. subjective experience or awareness. But more than this, they are also sufficient for continuing self-consciousness, by the observation that when we have taken them away from consciousness, nothing remains.

Final Remarks

All this implies that consciousness and mental capacities are not in fact the same thing, though they are often confused, and furthermore that it might be possible to have one and not the other. For example, consider a computer program capable of passing the Turing Test with flying colors, capable of meeting all of Hofstadter’s criteria for intelligence in a very large and complex domain. Then we would grant that this program has mental capacities. Yet if its only means of interaction with the world is via a dumb terminal, then its sensory-perceptual apparatus is very limited in scope, and our degree of confidence in saying that the program was conscious would be quite low. Alternately, consider a robot with an extremely complicated, varied and integrated sensory-perceptual system, and with a wide range of responsive behaviours, but which fails to exhibit any reasonable amount of intelligence in the sense of Hofstadter’s criteria. Then we would be perfectly justified in describing this entity as conscious, but not as having mental capacities.10 Looked at this way, the Argument by Consciousness against machine intelligence is off the mark.

References

Chomsky, N. (1993) Language and Thought. Moyer Bell, Wakefield, 1997.

Gunderson, K. (1964) “The Imitation Game” Mind 73: pp. 234-45.

Hofstadter, D. (1979) Gödel, Escher, Bach: An Eternal Golden Braid. Vintage edition, New York, 1989.

Hofstadter, D. (1995) Fluid Concepts and Creative Analogies. BasicBooks.

Kurzweil, R. (1997) “When Will HAL Understand What We Are Saying? Computer Speech Recognition and Understanding”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 131-69.

Lenat, D. (1997) “From 2001 to 2001: Common Sense and the Mind of HAL”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 193-209.

Loebner, H. (1994) “In response”, Communications of the ACM, Vol. 37, No. 6: pp. 79-82.

Pinker, S. (1997) How the Mind Works. Norton & Co., New York.

Rosenfeld, A. (1997) “Eyes for Computers: How HAL Could ‘See'”, HAL’s Legacy, D. Stork ed. The MIT Press, Cambridge MA: pp. 211-35.

Shieber, S. (1994) “Lessons from a restricted Turing test”, Communications of the ACM, Vol. 37, No. 6: pp. 70-78.

Turing, A. (1950) “Computing Machinery and Intelligence”, reprinted in The Mind’s I, Hofstadter and Dennett, eds. Bantam, New York, 1988: pp. 53-67.

Notes

1 Actually, Turing considered the question, “Can machines think?” to be “too meaningless to deserve discussion.” He nevertheless predicted that by the end of the century, common usage of the word would be extended to include some machine activities. (1950: 47)

2 Turing suggested five minutes, but the exact length of time is not considered to be terribly important.

3 The full transcript is available on the web at <http://www.phm.gov.au/whatson/pc1.htm>. More information about the Loebner Prize, including contest rules and transcripts of previous winning conversations, can be found at the official website http://www.loebner.net/Prizef/loebner-prize.html. Shieber (1994) explains how the 1991 competition was run, and comments on its efficacy.

4 It is not anticipated that the grand prize will be won any time soon. The current state of the art in speech recognition (see Kurzweil 1997) and computer vision technologies (see Rosenfeld 1997) are nowhere near human-level.

5 For example, “What in ‘12344321’ corresponds to ‘4’ in ‘1234554321’?” (Hofstadter 1995: 195)

6 For example, “I change efg into efw. Can you ‘do the same thing’ to ghi?” (op. cit.: 202)

7 Examples of such software include Harold Cohen’s drawing program “Aaron”, and Chamberlain & Etter’s “Racter”, which writes original prose.

8 Specifically, the mind-body problem is the problem of establishing the exact nature of the relationship between “mental phenomena” and “physical phenomena”, where the two are assumed to be distinct. The other minds problem involves the seeming conundrum of being able to verify the existence of other minds, i.e. subjective experiences.

9 Turing might agree with me on this. ;-)

10 Most animals fall into this category.


I’m currently working on a portfolio website for my dad’s paintings.  He gave me a ton of hi-res images that I needed to scale down to a maximum dimension of 400px.  I wrote this Gimp script to help automate the process:

scale-to-max.scm

(define (scale-to-max infile
                      outfile
                      newmax)
  (let* ((image (car (gimp-file-load RUN-NONINTERACTIVE infile infile)))
         (oldwidth (car (gimp-image-width image)))
         (oldheight (car (gimp-image-height image)))
         (oldmax (max oldwidth oldheight))
         (newwidth (round (/ (* oldwidth newmax) oldmax)))
         (newheight (round (/ (* oldheight newmax) oldmax))))
    (print (string-append "scale-to-max " infile " " (number->string newwidth) "x" (number->string newheight)))
    (gimp-image-scale image newwidth newheight)
    (let* ((drawable (car (gimp-image-get-active-layer image))))
      (gimp-file-save RUN-NONINTERACTIVE image drawable outfile outfile))))

It takes 3 parameters:

  1. infile: the path of the source image
  2. outfile: the path to save the scaled copy to
  3. newmax: the new maximum dimension, in pixels, for the longer side

For example, a 3000x2000 original with a newmax of 400 comes out at 400x267.

To run it, put the file in your Gimp scripts folder and then run a command like this:
gimp -i -b '(scale-to-max "<path-to-original>" "<path-to-outfile>" 400)' -b '(gimp-quit 0)'


Last week Flock sponsored the W3C Workshop on the Future of Social Networking in Barcelona, and I had the honour of attending on behalf of the company.  The purpose of the 2-day workshop was to help determine what role the W3C should play, if any, in the emerging field of social networking services.  The event was chaired by Dominique Hazael-Massieux and Christine Perey.

About 80 people were in attendance, and 72 position papers were submitted beforehand to help provide some background and context for discussion.  That’s a lot of papers to read.  I got through most of them, but not all, prior to the workshop.  A nice short synopsis of each paper can be found here (and part two).  There was also some ongoing Twitter conversation during the workshop and the #w3csn topic was trending on Twitter for a while.

The breakdown of attendees was roughly one-third academics, one-third mobile industry, and one-third “other” including businesses involved to various degrees in social networking.  Conspicuously absent were representatives from any of the major social network operators; no one from Facebook, MySpace, Twitter, Flickr, etc.  (YouTube and Bebo submitted position papers, but did not give presentations and I didn’t notice them in attendance.  If someone from YouTube or Bebo was actually there, sorry I missed you!)  Geographic representation at the workshop was also predominantly European.

Day 1 – Thursday, January 15th

Presentations and discussion centered around “architectures for social networking”.  Many attendees (and indeed many of the submitted position papers) lamented the current “walled garden” model wherein social network operators are incentivised to closely guard their users, content and media.  Rather than moving towards an SNS monopoly or oligopoly, the SNS landscape is likely to become increasingly fragmented.  Despite the meteoric rise in usage of Facebook and MySpace in the last couple of years, other SNS operators are also seeing considerable growth.  In fact, someone noted that the “long tail” of regional/corporate/special interest social networks accounts for about 500 million users!  (Unfortunately I didn’t catch who said that or where the statistic came from.)  Ultimately, users would be better served by a model that allowed them to have data portability between networks, to manage fragmented identities (by either combining them or keeping them completely disconnected, as desired), and to use different providers for the services and specialties they provide (eg. LinkedIn for resumes, MySpace for music, etc…) without being tied to particular network operators.

There were two concurrent breakout sessions in the morning: one on Distributed Social Networking (which I attended) and the other on Data Mining.

The Distributed Social Networking session had some high quality discussion, but did not result in any concrete recommendations as to potential roles for the W3C.  Some architectures for distributed/decentralized social networking systems were discussed, and it is clear that the barrier to implementation is not a technical one.  Existing data format standards, protocols and APIs such as OpenID, OAuth, OpenSocial, FOAF and XMPP are sufficient to implement such systems, but they don’t address the business forces shaping the walled-garden problem that is the status quo.  That will require business model innovation, rather than technical, and as such isn’t really in the W3C’s realm.

The Data Mining breakout session apparently had more concrete results.  The recommendation was for W3C to define a standardized data interchange format as a harmonization/extension of existing formats such as FOAF, Atom, etc.  There were strong assertions from the academic crowd that RDF is the appropriate way to model social network data, but the idea of using it as an interchange format was generally pooh-poohed due to its complexity.

After lunch, there were concurrent breakout sessions on Privacy and Trust (which I attended), and Distributed Architecture Business Models.

The Privacy and Trust session did not result in any strong recommendations for the W3C’s course of action.  The issue of identity fragmentation (ie., users having multiple profiles on multiple services) was discussed at length.  Some users consider it an inconvenience to manage multiple online identities, whereas others absolutely rely on it to maintain privacy — they don’t ever want their LinkedIn account to be associated with their MySpace account, for example.

The W3C’s existing P3P initiative was mentioned as needing to be extended in order to really be applicable for SNS sites.  Other than that, user education/awareness was cited as the main issue needing to be addressed.  One person said something to the effect of, “Wherever I go, it should be obvious to me what my current privacy and security context is.”

Blaine Cook (of BT, formerly of Twitter) threw out an interesting idea about using capabilities-based cloud data stores so that users could maintain better access controls over their data, AND the data could not trivially be associated with the user unless they wanted it to be.  Worth some more thought.

The Distributed Architectures and Business Models breakout considered whether there are viable business models for a more distributed/decentralized model of online communities; in other words, can we escape the walled garden paradigm?  There was apparently some disagreement between web and mobile operators in these discussions.  Everyone agreed that selling in a social context has been shown to work.  A sizeable contingent further believe that a widely adopted micropayments system would go a long way towards enabling economies between SNS operators, rather than just within their closed communities.  In response to this, W3C may look at restarting its work on micropayment systems, which has been stagnant for a few years now.

Day 2 – Friday, January 16th

The presentations and most of the discussion on the second day of the workshop centered around the topic of “context and communities”.  That is, enriching social applications and online social interactions with contextual data such as location, relationship, engagement mode, etc.  There was a fair bit of discussion about geolocation and privacy controls for geodata.

Julian Pye of Vodafone gave an interesting presentation (paper, slides) on adapting user interface according to context.  For instance, automatically stemming the flow of friend activity updates according to my relationship with the contact, whether I’m at home, work, or school, on a holiday or a business trip, my proximity to the source, etc.

Simon Hay from the University of Cambridge got more than a few chuckles for his entertaining use of Harry Potter analogies in his presentation (paper, slides) on the use of sensor arrays tied to social networks.  They implemented something akin to the Marauder’s Map, which shows the location of every student on campus (or in the case of UC, just those students who chose to participate) and could notify you if, for example, several of your friends were heading down to the coffee shop.

There was also an interesting side-discussion about the validity of “Dunbar’s number” (usually cited as being 150) as the theoretical cognitive limit on the number of friends one can maintain.  Harry Halpin asserted that this number was fallaciously extrapolated from studies on primate sociology, and that it’s much more useful to think in terms of a graduated scale of 12 intimate friends, 150 frequent contacts, 1500 infrequent contacts and 1,500,000 lifetime contacts as averages for humans.

Outcomes

As mentioned above, suggestions for the W3C’s course of action include:

  • Put forward a recommendation for a data interchange format for social/identity data (as a harmonization/extension of existing data formats such as FOAF, etc.)
  • Look at extending P3P to be more applicable to SNS sites and communities.
  • Look at restarting Micropayments standardization work — there may be more business interest in this now than there was a few years ago.

In addition, Harry Halpin put forward a draft charter for a W3C Social Web Incubator Group to continue discussing standardization of the social web technology stack.

There was also a suggestion that a follow-up workshop be planned in 6 months’ time, probably in North America.

Update 2009-02-03: The official W3C report is now available on their site, and includes a couple of photos by yours truly. :)


This weekend I got the latest version of Gimp running on my Mac, and started playing with some of the artistic filters.

I came across a combination that results in a look somewhat reminiscent of the movie A Scanner Darkly, and decided to write a Gimp script to apply this combination. Then I got curious about applying this to video, and ended up figuring out how to do that. The results are pretty cool looking.

  1. Split your video into frames at 30fps. I used iMovie ’08 to do this.
  2. Run the batch-scanner-gimply command in Gimp to batch-process all the frames. This can be done from the command-line, eg:
    /Applications/Gimp.app/Contents/Resources/bin/gimp -i -b
    '(batch-scanner-gimply "/Users/chris/Movies/Snorkeling/"
    "Snorkeling*.jpg" 17 50 75)' -b '(gimp-quit 0)'
  3. Use QuickTime 7 Pro to compose the processed frames back into a .mov file.
  4. Bring it back into iMovie if you want to add the audio track back in. (Note that iMovie ’08 has hidden the “extract audio” feature, but it’s still there.)

Here’s the scanner-gimply.scm file that needs to go in your Gimp scripts folder:

(define (batch-scanner-gimply folder
                              filepattern
                              oilsize
                              edgeamount
                              edgeopacity)
  (let* ((filelist (cadr (file-glob (string-append folder filepattern) 1)))
         (dirlength (string-length folder)))
    (print "Start batch-scanner-gimply")
    (while (not (null? filelist))
           (let* ((filepath (car filelist))
                  (filename (substring filepath dirlength (string-length filepath)))
                  (image (car (gimp-file-load RUN-NONINTERACTIVE
                                              filepath filename)))
                  (baselayer (car (gimp-image-get-active-layer image)))
                  (oillayer (car (gimp-layer-copy baselayer FALSE))))
             (print (string-append "Processing " folder filename))
             (gimp-image-add-layer image oillayer -1)
             (plug-in-oilify RUN-NONINTERACTIVE image oillayer oilsize 1)
             (plug-in-edge RUN-NONINTERACTIVE image oillayer edgeamount 1 4)
             (gimp-layer-set-mode oillayer 21)
             (gimp-layer-set-opacity oillayer edgeopacity)
             (gimp-image-flatten image)
             (gimp-file-save RUN-NONINTERACTIVE
                             image (car (gimp-image-get-active-layer image))
                             filepath filename)
             (print (string-append "Saved " filename))
             (gimp-image-delete image))
           (set! filelist (cdr filelist)))))