gerwin.klein at nicta.com.au
Sun Feb 6 23:14:53 CET 2011
On 07/02/2011, at 2:10 AM, Florian Haftmann wrote:
>> Is anyone prepared to leak Florian's new email address so I may haunt him?
> I am prepared to be haunted ;-). I am still on the users and dev mailing
> lists and will follow them using the walrus strategy (only surfacing
> instantly if you think it is necessary).
Works for me ;-)
>> haftmann at scarlatti:/data/tum/isabelle/blank/src/HOL/Word$ hg cat -r 14cabf5fa710 . | grep -oP '^(lemmas?|corollary|theorems?) [A-Za-z0-9_'\'']+' | sort > names1.txt
>> haftmann at scarlatti:/data/tum/isabelle/blank/src/HOL/Word$ hg cat -r 56e3520b68b2 . | grep -oP '^(lemmas?|corollary|theorems?) [A-Za-z0-9_'\'']+' | sort > names2.txt
>> haftmann at scarlatti:/data/tum/isabelle/blank/src/HOL/Word$ diff names1.txt names2.txt
>> < lemma bl_and_mask
>>> lemma bl_and_mask'
>> < lemma shiftl_0
>>> lemma shiftl_x_0
>> < lemma shiftr_0
>>> lemma shiftr_x_0
>> < lemmas unat_sub
>> < lemma uint_0
>>> lemma uint_eq_0
>> < lemma unat_sub_simple
>> < lemma word_less_sub1
>> < lemma word_le_sub1
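The renamed pairs in that diff (the `unat_sub*` and `word_*_sub1` lemmas appear to have been removed outright rather than renamed) can be applied to a proof script mechanically. A minimal sketch, assuming GNU sed; `Proof.thy` and its contents are invented for illustration:

```shell
# Hypothetical example: rewrite old lemma names to their new ones.
# The old/new pairs come from the diff above; Proof.thy is made up.
dir=$(mktemp -d)
cat > "$dir/renames.txt" <<'EOF'
bl_and_mask bl_and_mask'
shiftl_0 shiftl_x_0
shiftr_0 shiftr_x_0
uint_0 uint_eq_0
EOF
printf 'apply (simp add: shiftl_0 uint_0)\n' > "$dir/Proof.thy"
while read -r old new; do
  # \b = word boundary (GNU sed), so uint_0 does not match inside e.g. uint_0X
  sed -i "s/\b$old\b/$new/g" "$dir/Proof.thy"
done < "$dir/renames.txt"
cat "$dir/Proof.thy"
```

In a real update you would run this over all affected theory and ML files, and still review the result, since a removed lemma (like unat_sub) needs a manual replacement rather than a rename.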
> These should be the only lemma names affected.
Thanks, that should help. I wasn't sure that this was the only changeset in which something substantial happened, and was frustrated enough not to look further.
> I must confess that the changeset I produced there does not really
> follow best Isabelle practice, so it is not clear to me whether these
> were mere slips or intentional omissions due to duplication. But
> given the quick-and-dirty queries above it should not be too
> difficult to ascertain which case applies. Is this enough to
> solve your problem? Otherwise I will keep tracking this, but it
> might take a while.
It's OK for now; I'll keep going and let you know if I hit anything larger.
In fact, so far the update has gone relatively smoothly. The main work, as always, is in updating the ML-level proof tools, but that is expected.
>> Someone (Florian, I guess) thought it was a good idea to have "one unified Word theory" as the commit log says. I heartily disagree, but whatever.
> The motivation for this changeset in the first place was the repeated
> pleas for adding code generation to the word type. Examining what was
> needed led me to the conclusion that the previous division of the Word
> theories was largely artificial, in particular the very ancient practice
> of separating definitions from their corresponding lemmas into different
> theories.
It started out that ancient way, and I'm sure some refactoring would have been useful, but mixing everything into one big theory doesn't help understanding. I'm not a big fan of arbitrarily mixing definitions and lemmas in any case: there are usually about an order of magnitude more lemmas, which makes the definitions disappear in the mess. Larger theories with sections make sense, but they shouldn't be 5k lines long.
> For me it was beyond doubt that a theory defining the word
> type should be called "Word", but this was already in use for the main
> entrance point, so I was also reluctant to further partition that
> unified Word theory.
Calling the toplevel theory Word wasn't my idea either; it used to be called WordMain and was changed in 2009-2 (where it then conflicted with Library/Word). I agree that the type and operation definitions should have gone into Word.thy. The rest of the development would be better off with some structure, though.
None of this is that important; I can live with how it is now. It would just have been nicer to discuss it beforehand.
> With some irony you could call this an instance of aspect-oriented
> programming:
I guess there is the root of all evil :-)
> it is difficult to weave the aspect of code generation
> into a theory when the original intentions behind it are beyond your
> scope. What has always made me stumble is that parts of the Word
> theories (in particular Bit_Int) are so highly dependent on the
> accidental representation of int values as signed binary numerals; I
> always asked myself whether this modelling is really necessary or could
> be formulated in a more abstract fashion.
Yes, that may not have been the best choice, and it already led to major pain when the numerals were updated a while back. The binary numerals are not that far from how fixed-size words work (which is why Jeremy picked them), but there are a number of other representations that are proved isomorphic to them. Any of these could be the base level. Then again, it's already there, and you'll want the connection to numerals at some point so the simplifier can do arithmetic on them.
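For what it's worth, the connection Florian is stumbling over, n-bit words read as signed binary numerals, is just the int <-> word correspondence mod 2^n. A toy sketch in shell arithmetic (an illustration of two's complement, not Isabelle's actual construction):

```shell
# Toy sketch of the int <-> n-bit word correspondence (n = 8).
n=8
# unsigned view: an int maps to a word by taking it mod 2^n
unsigned=$(( -1 & ((1 << n) - 1) ))   # two's complement: -1 becomes 255
# signed view: words >= 2^(n-1) read back as negative numerals
w=$unsigned
if (( w >= (1 << (n - 1)) )); then
  signed=$(( w - (1 << n) ))
else
  signed=$w
fi
echo "$unsigned $signed"
```

The point of the "abstract fashion" question is whether lemmas should talk about this quotient structure directly instead of about the accidental numeral representation of int.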
I do appreciate the code generation for words, btw. Very nice to have.