initial version

Cleaned up the sources of the ITP course:
- remove internal notes
- remove exercise solutions
- remove the KTH logo
- add a Creative Commons license

commit 3c35cc25c3
17
.gitignore
vendored
Normal file
@@ -0,0 +1,17 @@
*.log
*~
*.toc
*.nav
*.ent
*.aux
*.out
*.snm
*.dvi
/lectures/pdfs/*
/questionnaire/questionnaire-simple.pdf
/questionnaire/questionnaire.pdf
/lectures/version.inc
/#*
/.~*
/lectures/tmp/*
*eps-converted-to.pdf
17
LICENSE
Normal file
@@ -0,0 +1,17 @@
The 'Interactive Theorem Proving Course'
by Thomas Tuerk (http://www.thomas-tuerk.de/en)
is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.

Except where otherwise noted, the license covers all the course
material, including the slides, the exercise sheets and the
questionnaires.

The CC logos in the subdirectory 'lectures/images/cc' are an exception:
they are covered by the Creative Commons Trademark Policy
(https://creativecommons.org/policies).

The LaTeX sources of the material as well as a few technical aids
(like Makefiles) are also provided. Please use these sources as you
see fit. However, if you use significant parts of the source files,
please publish your modified sources as well.
16
README.md
Normal file
@@ -0,0 +1,16 @@
Interactive Theorem Proving Course
==================================

This repository contains the sources for an _Interactive Theorem Proving Course_
that focuses on HOL4. It was originally given by the PROSPER group at KTH in Stockholm in 2017.

There is a live version at https://hol-theorem-prover.org/hol-course-print.pdf.

Authors
--------

- Thomas Tuerk (http://www.thomas-tuerk.de)

Copyright License
------------------
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
5
exercises/Makefile
Executable file
@@ -0,0 +1,5 @@
all:

clean:
	rm -f *Theory.sig *Theory.sml *.ui *.uo \#* *.toc *.aux *.ps *.log *.lof *.bbl *.blg *.hix *.tid *.tde *.out *~
82
exercises/README
Normal file
@@ -0,0 +1,82 @@
This directory contains exercises that were used during an ITP course
at KTH in Stockholm in 2017 (see
https://www.kth.se/social/group/interactive-theorem-/). These
exercises are intended to accompany the slides of this course, which
have been made publicly available.

When working on the exercises, you don't need to read the whole sheet
to the end before starting to work on an exercise. However, I highly
recommend reading all subquestions first. Some are easier if they
have already been considered while working on previous parts. Often
there are hints at the very end of an exercise sheet. The intention is
that you work on an exercise first without these hints. If you have
trouble, they provide some help. Usually it is a valuable learning
experience to think about what the hints explain. So I really
recommend attempting the exercises without the hints first.


There are the following exercise sheets:


0) Background Questionnaire (before the lecture started)

This was handed out before the lecture even started. Its intention
was to get a feeling for the background of the students. It was
expected that students on average are able to solve half of the
questions within 1 h.


1) Exercise 1 (very beginning of the course)

This exercise asks students to set up their HOL environment and
practise using SML. It was handed out at the very beginning of the
course and does not require any knowledge from the course.


2) Exercise 2 (after Part 6, i.e. after forward proofs)

Learn basic usage of HOL and Emacs: how to construct terms, simple
forward proofs and simple proof automation.


3) Exercise 3 (after Part 9, i.e. after induction proofs)

Play around with simple backward proofs.


4) Exercise 4 (after Part 11, i.e. good definitions)

Some simple proofs and definitions. The challenge is how to structure your
proofs nicely. Moreover, this exercise requires some SML programming and
connects proofs and SML execution.


5) Exercise 5 (after Part 12, i.e. deep/shallow embeddings; knowledge about the simplifier from Part 13 is useful)

This exercise focuses on the effect of different definitions on
proofs. Moreover, more so than in Exercise 4, students are required to
structure their development by defining their own auxiliary definitions
and lemmata. Some proof ideas are, while still rather simple, not trivial.
This exercise can be solved without using the simplifier. However, the
simplifier can help a lot. Similarly, it is encouraged to really
learn how to use Metis for this exercise.


6) Exercise 6 (final project, after Part 13, simplifier)

For organisational reasons, the final project was presented in
Exercise 6, i.e. before Exercise 7 and the end of the course. It
requires people to learn about parts of HOL themselves, do a non-trivial
formalisation and come up with some non-trivial proofs.
Exercise 6 is intended to take 3-4 times as much time as the other
exercises.


7) Exercise 7 (after Part 14, advanced definitions)

Some exercises about advanced usage of the simplifier and how to use
inductive relations. It is very short, since people were already
working in parallel on their final project.
39
exercises/a2.dot
Normal file
@@ -0,0 +1,39 @@
digraph G {
  node_0 [label="0: '0'"]
  node_2 [label="2: -"]
  node_6 [label="6: '2'"]
  node_10 [label="10: -"]
  node_18 [label="18: '6'"]
  node_10 -> node_18 [label="r"]
  node_6 -> node_10 [label="r"]
  node_2 -> node_6 [label="l"]
  node_4 [label="4: -"]
  node_12 [label="12: '4'"]
  node_4 -> node_12 [label="l"]
  node_8 [label="8: -"]
  node_24 [label="24: '8'"]
  node_8 -> node_24 [label="l"]
  node_4 -> node_8 [label="r"]
  node_2 -> node_4 [label="r"]
  node_0 -> node_2 [label="l"]
  node_1 [label="1: -"]
  node_5 [label="5: -"]
  node_13 [label="13: -"]
  node_21 [label="21: '7'"]
  node_13 -> node_21 [label="r"]
  node_5 -> node_13 [label="l"]
  node_9 [label="9: '3'"]
  node_5 -> node_9 [label="r"]
  node_1 -> node_5 [label="l"]
  node_3 [label="3: '1'"]
  node_11 [label="11: -"]
  node_27 [label="27: '9'"]
  node_11 -> node_27 [label="l"]
  node_3 -> node_11 [label="l"]
  node_7 [label="7: -"]
  node_15 [label="15: '5'"]
  node_7 -> node_15 [label="r"]
  node_3 -> node_7 [label="r"]
  node_1 -> node_3 [label="r"]
  node_0 -> node_1 [label="r"]
}
781
exercises/a2.eps
Normal file
@@ -0,0 +1,781 @@
%!PS-Adobe-3.0 EPSF-3.0
%%Creator: graphviz version 2.38.0 (20140413.2041)
%%Title: G
%%Pages: 1
%%BoundingBox: 36 36 559 428
[... 776 further lines of Graphviz-generated PostScript rendering the tree from exercises/a2.dot; drawing code omitted ...]
121
exercises/e1.tex
Normal file
@@ -0,0 +1,121 @@
\documentclass[a4paper,10pt,oneside]{scrartcl}

\usepackage[utf8]{inputenc}
\usepackage[a4paper]{geometry}
\usepackage{hyperref}
\usepackage{url}
\usepackage{color}
\usepackage{amsfonts}
\input{../hol_commands.inc}

\title{Exercise 1}

\begin{document}
\begin{center}
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 1
\webversion{}{\\\small{due Friday 28th April}}
\end{center}
\bigskip


\section{Setting up the Environment}

We will use the HOL theorem prover\footnote{\url{https://hol-theorem-prover.org}}.
For the practical sessions you will need to be able to use HOL on your own machine. Therefore, please set up the following software.

\subsection{Standard ML}

You will need to have Standard ML available. Please install PolyML 5.6\footnote{webpage \url{http://www.polyml.org}}\footnote{download link \url{https://github.com/polyml/polyml/releases/tag/v5.6}} or later.

\subsection{HOL}

Please install a recent version of the HOL theorem prover. I recommend installing the most recent version from the git repository. If for some reason you don't want to do this, the latest release should be fine as well. Installation instructions can be found on HOL's webpage\footnote{see \url{https://hol-theorem-prover.org/\#get}}.

\subsection{Emacs}

In the lecture, GNU Emacs\footnote{\url{https://www.gnu.org/software/emacs/}} will be used as the user interface. Please install a recent version of Emacs. Please make sure you use Emacs and not XEmacs.

\subsection{HOL Mode and SML Mode}
We will use the \emph{hol-mode} for Emacs. It is distributed with HOL, but needs setting up in Emacs. Please set it up and familiarise yourself with its basic usage. Documentation can be found on HOL's webpage\footnote{see \url{https://hol-theorem-prover.org/hol-mode.html}}.
We will write SML programs all the time. Please install the SML mode\footnote{\url{https://elpa.gnu.org/packages/sml-mode.html}} to enable syntax highlighting for ML in Emacs.
Information on both the SML and the HOL mode can also be found in HOL's interaction
manual\footnote{\url{https://hol-theorem-prover.org/HOL-interaction.pdf}}.


\section{SML}

Let's refresh our knowledge of Standard ML. Moreover, these
exercises are aimed at getting familiar with Emacs and the HOL
mode. So, please use Emacs with hol-mode as the user interface and treat
HOL like an ML REPL.

To learn more about the Emacs mode, you can have a look at the HOL interaction manual.
If you need a brush-up on SML syntax, I recommend reading something compact like
\url{https://learnxinyminutes.com/docs/standard-ml/}. If you need more, the book
\emph{ML for the Working Programmer} by Prof.\ Larry Paulson is a good introduction.

\subsection{Our Own Lists}

Of course SML comes with a decent list library. However, as an exercise, implement your own list datatype and implement the following list operations for your own datatype:
%
\begin{itemize}
\item \texttt{length}
\item \texttt{append} (\texttt{@})
\item \texttt{rev}
\item \texttt{revAppend}
\item \texttt{exists}
\end{itemize}
%
If you don't know what these functions should do, you can find documentation of the Standard ML Basis Library at e.\,g.\ \url{http://sml-family.org}.

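One possible shape for such a datatype is sketched below; the constructor names are only a suggestion, and the comment merely restates the signatures the functions could have in this setting:

\begin{verbatim}
(* a hand-rolled list type; any isomorphic declaration is fine *)
datatype 'a mylist = Nil
                   | Cons of 'a * 'a mylist

(* intended signatures (the implementations are the exercise):
     length    : 'a mylist -> int
     append    : 'a mylist -> 'a mylist -> 'a mylist
     rev       : 'a mylist -> 'a mylist
     revAppend : 'a mylist -> 'a mylist -> 'a mylist
     exists    : ('a -> bool) -> 'a mylist -> bool          *)
\end{verbatim}
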
In addition, implement a function \texttt{replicate\ :\ 'a -> int -> 'a list}, which is supposed to
construct a list of the given length that only contains the given element. For example, \texttt{replicate "a" 3} should return the list \texttt{["a", "a", "a"]}.

\begin{enumerate}
\item Prove with pen and paper that for your implementation
\texttt{append l [] = l} holds for all \texttt{l}.
\item Similarly, prove \texttt{$\forall$l1 l2.\ length (append l1 l2) = length l1 + length l2}.
\item There are strong connections between \texttt{append}, \texttt{revAppend} and \texttt{rev}.
One can for example define \texttt{revAppend} by \texttt{revAppend l1 l2 = append (rev l1) l2}.
Write down similar definitions for \texttt{rev} and \texttt{append} using only \texttt{revAppend}.
\end{enumerate}

\subsection{Making Change}

In the following, let's use the standard list library again. Write a
program that, given the coins and notes you have in your wallet, lists
\emph{all} the possible ways to pay a certain amount. Represent the
coins you have as a list of integers. If a number occurs twice in
this list, you have two coins with this value. The result should be
returned in the form of a list of lists. For simplicity, the output
may contain duplicates.

An example implementation of the function
\texttt{make\_change\ :\ int list -> int -> int list list} shows,
for example, the following outputs. Notice, however, that the output of
your implementation is allowed to contain duplicates and to list the results in a different
order.

\begin{itemize}
\item \texttt{make\_change [5,2,2,1,1,1] 6 =}\\
\-\texttt{\ \ \ [[5, 1], [2, 2, 1, 1]]}
\item \texttt{make\_change [5,2,2,1,1,1] 15 = []}
\item \texttt{make\_change [10,5,5,5,2,2,1,1,1] 10 =}\\
\-\texttt{\ \ \ [[10], [5, 5], [5, 2, 2, 1], [5, 2, 1, 1, 1]]}
\end{itemize}
\bigskip
Write down, as formally as you can, some properties of \texttt{make\_change}. An example
property is

\begin{center}\texttt{
$\forall$cs n. n > sum cs $\Longrightarrow$ make\_change cs n = []}
\end{center}
where \texttt{sum} is defined by \texttt{val sum = foldl (op+) 0} and we assume that \texttt{cs} contains no number less than 0.


\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
166
exercises/e2.tex
Normal file
@@ -0,0 +1,166 @@
\documentclass[a4paper,10pt,oneside]{scrartcl}

\usepackage[utf8]{inputenc}
\usepackage[a4paper]{geometry}
\usepackage{hyperref}
\usepackage{url}
\usepackage{color}
\usepackage{amsfonts}

\input{../hol_commands.inc}
\title{Exercise 2}

\begin{document}

\begin{center}
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 2
\webversion{}{\\\small{due Friday 5th May}}
\end{center}
\bigskip


\section{Self-Study}

\subsection{Emacs}
If you don't know Emacs well, familiarise yourself with its basic usage. Learn the key combinations for common operations like
opening a file, saving the current buffer, closing a buffer, switching between buffers, searching in a file, copying and pasting text etc.
You might consider printing the \emph{Emacs Reference Card}\footnote{\url{https://www.gnu.org/software/emacs/refcards/pdf/refcard.pdf}} and putting
it next to your computer.

\subsection{HOL Documentation}
Familiarise yourself with how to get help about HOL.

\begin{itemize}
\item Build the various documentations in directory \texttt{Manual}. For this, call \texttt{make} in directory \texttt{HOL-HOME/Manual}. Building the manuals requires \texttt{hol} and \texttt{Holmake} to run.
Therefore make sure \texttt{HOL-HOME/bin} is in your PATH.
\item Have a brief look at the various manuals in order to understand where which kind of information
can be found.
\item The lectures will cover the logical foundations of the HOL
theorem prover only very briefly and lightly. If you are interested in more
details, have a look at the \emph{Logic} manual. This is purely optional.
\item Familiarise yourself with the different ways to access the reference manual.
As an example, read up on \texttt{MATCH\_MP} in the HTML reference manual, the PDF reference manual and the in-system help (type \texttt{help "MATCH\_MP"}).
\item Familiarise yourself with the different printing switches of HOL, in particular the ones in hol-mode's menu. Learn how to turn Unicode output on/off, how to show assumptions of theorems and how to show type annotations.
\item Use \texttt{DB.match} and \texttt{DB.find} to find theorems stating \verb#A /\ A = A#. Use both the Emacs mode and the SML REPL (see the sketch after this list). Look at the interface of \texttt{DB}.
\end{itemize}

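As a rough sketch of the last item, queries like the following can be typed in the HOL REPL; the exact list of results depends on your HOL version and on which theories are loaded:

\begin{verbatim}
(* list theorems from all loaded theories that contain a
   subterm matching the pattern A /\ A *)
DB.match [] ``A /\ A``;

(* list theorems whose name contains the given string *)
DB.find "AND_CLAUSES";
\end{verbatim}
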
\subsection{Holmake}
Learn about \texttt{Holmake} by reading Description manual sections 7.3 - 7.3.4.

\subsection{Constructing Terms and Forward Proofs}

To deepen your knowledge about how to construct terms, how to program in HOL and how to perform
forward proofs, please look at the following HOL modules: \ml{FinalThm.sml}, \ml{FinalTerm.sml},
\ml{FinalType.sml}, \ml{Drule.sig}, \ml{Psyntax.sig}, \ml{boolSyntax.sig}, \ml{Lib.sig}.


\section{Terms}

\subsection{Free and Bound Variables}

List the free variables of the following terms:
\begin{itemize}
\item \verb#(\x. 2 + (7 * x) + y) z#
\item \verb#x + y + 2#
\item \verb#!x. x + 1 > x#
\item \verb#?x. x = y + 2#
\end{itemize}

\subsection{Alpha Equivalence}

Are the following pairs of terms alpha-equivalent? A simple mark on the sheet is a sufficient answer. Also take two colours and mark all occurrences of free variables in one colour and all occurrences of bound variables in the other. Assume that \texttt{x}, \texttt{y}, \texttt{z}, \texttt{a} and \texttt{b} are variables. \bigskip

\begin{tabular}{@{$\ \ \bullet\ \ $}ll}
\verb#\x. x# & \verb#\y. y# \\
\verb#(\x. x) a# & \verb#(\y. y) a# \\
\verb#(\x. x) a# & \verb#(\y. y) b# \\
\verb#(\x. x)# & \verb#(\x. y)# \\
\verb#(\x y. x /\ y)# & \verb#(\y x. x /\ y)# \\
\verb#(\x y. x /\ y) a a# & \verb#(\y x. x /\ y) a a# \\
\verb#a /\ b# & \verb#a /\ b# \\
\verb#!x. x + 1 > x# & \verb#!y. y + 1 > y# \\
\verb#?x. x = y + 2# & \verb#?x. x = z + 2# \\
\verb#!y. ?x. x = y + 2# & \verb#!z. ?x. x = z + 2# \\
\end{tabular}

\subsection{Constructing Terms}

Write an SML function \ml{mk\_imp\_conj\_term :\ int -> term} that constructs, for an input $n$ greater than 1, the term \verb#!A1 ... An. A1 ==> ... ==> An ==> (A1 /\ ... /\ An)#. If $n$ is not greater than one, raise a \ml{HOL\_ERR} exception (use \ml{failwith}). You might want to read up on \ml{boolSyntax} for this exercise. You can use list-make-functions like \ml{list\_mk\_conj}, but also use non-list ones.

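For orientation, the fixed case $n = 2$ can be built with \ml{boolSyntax} roughly as sketched below; the general, recursive construction for arbitrary $n$ is the exercise:

\begin{verbatim}
(* sketch: building ``!A1 A2. A1 ==> A2 ==> (A1 /\ A2)`` by hand *)
open boolSyntax;
val A1 = mk_var ("A1", ``:bool``);
val A2 = mk_var ("A2", ``:bool``);
val tm = list_mk_forall ([A1, A2],
           mk_imp (A1, mk_imp (A2, mk_conj (A1, A2))));
\end{verbatim}
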
\section{Basic Forward Proofs}

\subsection{Commutativity of Conjunction}
Prove the lemma \verb#!A B. (A /\ B) <=> (B /\ A)# using only inferences presented in the lecture.

\subsection{Simplifying Conjunction}
Prove the lemmas \verb#!A. (A /\ ~A) <=> F# and \verb#!A B. (A /\ ~A) /\ B <=> F#.


\section{Writing Your Own Automation}

\subsection{Implications between Conjunctions}
Write a function \texttt{show\_big\_conj\_imp :\ term -> term -> thm} that assumes that both terms are conjunctions and tries to prove that the first one implies the second one. It should be clever enough to handle \texttt{T} and \texttt{F}.
\verb#show_big_conj_imp ``a /\ (b /\ a) /\ c`` ``c /\ T /\ a``#, for example, should succeed with \verb#|- (a /\ (b /\ a) /\ c) ==> (c /\ T /\ a)#. It should also be able to show \verb#|- (a /\ F) /\ c ==> d#. If the implication cannot be shown, the function \texttt{show\_big\_conj\_imp} should raise \texttt{HOL\_ERR}.

For this exercise it might be useful to read up on \texttt{Term.compare} and the red-black sets and maps in directory \texttt{portableML}.



\subsection{Equivalences between Conjunctions}
Use the function \texttt{show\_big\_conj\_imp} to now define a function \texttt{show\_big\_conj\_eq :\ term -> term -> thm} that tries to show the equivalence between the input terms. If both input terms are alpha-equivalent, it should raise an \texttt{UNCHANGED} exception. If the equivalence cannot be proved, a \texttt{HOL\_ERR} exception should be raised.

\subsection{Duplicates in Conjunctions}
Use the function \texttt{show\_big\_conj\_eq} to implement a conversion \texttt{remove\_dups\_in\_conj\_CONV} that replaces duplicate appearances of a term in a large conjunction with \texttt{T}. Given the term \begin{center}\verb#a /\ (b /\ a) /\ c /\ b /\ a#
\end{center}
it should for example return the theorem
\begin{center}
\verb#|- (a /\ (b /\ a) /\ c /\ b /\ a) = (a /\ (b /\ T) /\ c /\ T /\ T)#.
\end{center}
If no duplicates are found, \texttt{UNCHANGED} should be raised. If the input is not of type
\texttt{bool}, a \texttt{HOL\_ERR} should be raised.

\subsection{Contradictions in Conjunctions}
Use the function \texttt{show\_big\_conj\_eq} to implement a conversion \texttt{find\_contr\_in\_conj\_CONV} that searches for terms and their negations in a large conjunction. If such contradictions are found, the term should be converted to \texttt{F}.
Given the term \begin{center}\verb#a /\ (b /\ ~a) /\ c#
\end{center}
it should for example return the theorem
\begin{center}
\verb#|- (a /\ (b /\ ~a) /\ c) = F#.
\end{center}
If no contradictions are found, \texttt{UNCHANGED} should be raised. If the input is not of type
\texttt{bool}, a \texttt{HOL\_ERR} should be raised.


\section{Squabbling Philosophers}

Recently, keen historians were finally able to deduce where the less well-known ancient philosophers
Platon, Diogenes and Euklid came from (see the background questionnaire). However, in order to avoid being embarrassed by announcing a wrong result, they asked you to check their reasoning using HOL.
Can you help and show that Platon indeed came from Sparta?

\subsection{Download and Compile}
Get the file \texttt{philScript.sml}\webversion{}{ from the exercise repository\footnote{\url{https://gits-15.sys.kth.se/tuerk/ITP-exercises}}}. Compile it with \texttt{Holmake} to get a theory file. Read \texttt{philTheory.sig}.

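Once \texttt{Holmake} has run, a quick way to check that everything is in place is to load the new theory in a HOL session, roughly as follows (the theorem name is the one exported by \texttt{philTheory.sig}):

\begin{verbatim}
open philTheory;
PHIL_KNOWLEDGE;   (* the facts the historians established *)
\end{verbatim}
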
\subsection{Proof}
Open the theory \texttt{philTheory} and prove \verb#Sp platon#. This is a simple first-order logic problem. Therefore automated methods like resolution can solve it easily. HOL has such methods built in, in the form of \eg the resolution-based prover \ml{METIS}. For example,
\begin{center}
\verb#METIS_PROVE [PHIL_KNOWLEDGE] ``Sp platon``#
\end{center}
would already prove it. However, for learning, let us prove it via a low-level forward proof.

\begin{itemize}
\item Using the lemma \texttt{MONO\_NOT} and the inference rules \texttt{MATCH\_MP},
\texttt{SPEC} and \texttt{IMP\_TRANS}, show the lemma \verb#|- ~(W p) ==> Sp p#.
\item Similarly show \verb#|- ~(B p) ==> At p#.
\item Assume \verb#At platon# and using this show the lemma \verb#[At platon] |- F# with
\texttt{MP} and \texttt{MATCH\_MP}. You will need many steps and many different lemmata.
\item Using \texttt{DISCH}, \texttt{NOT\_INTRO} and \texttt{MATCH\_MP}, show \verb#Sp platon#.
\end{itemize}

Don't forget to turn printing of assumptions on in HOL, or you will have a hard time figuring out what is going on.
\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
65
exercises/e2Script.sml
Normal file
@@ -0,0 +1,65 @@
open HolKernel Parse boolLib bossLib;

val _ = new_theory "e2";

val _ = Datatype `Philosopher = diogenes | platon | euklid`;
val Philosopher_nchotomy = DB.fetch "-" "Philosopher_nchotomy";
val Philosopher_distinct = DB.fetch "-" "Philosopher_distinct";


val PHIL_KNOWLEDGE = new_specification ("PHIL_KNOWLEDGE", ["At", "Sp", "W", "B"],
  prove (``?At Sp W B.
      (!p. (Sp p ==> B p)) /\
      (!p. (At p ==> W p)) /\
      (!p. ~(Sp p) \/ ~(At p)) /\
      (!p. (Sp p) \/ (At p)) /\
      ((Sp platon) ==> ~(W diogenes)) /\
      ((Sp euklid) ==> ~(B diogenes)) /\
      ((At diogenes) ==> ~(B euklid)) /\
      ((At platon) ==> ~(W euklid))``,

    Q.EXISTS_TAC `\p. Philosopher_CASE p F F T` THEN
    Q.EXISTS_TAC `\p. Philosopher_CASE p T T F` THEN
    Q.EXISTS_TAC `\p. Philosopher_CASE p F T T` THEN
    Q.EXISTS_TAC `\p. Philosopher_CASE p T T F` THEN
    SIMP_TAC (srw_ss()++DatatypeSimps.expand_type_quants_ss [``:Philosopher``]) []));


val PHIL_KNOWLEDGE_a = store_thm ("PHIL_KNOWLEDGE_a", ``!p. Sp p ==> B p``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_b = store_thm ("PHIL_KNOWLEDGE_b", ``!p. At p ==> W p``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_c = store_thm ("PHIL_KNOWLEDGE_c", ``!p. ~(Sp p) \/ ~(At p)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_c1 = store_thm ("PHIL_KNOWLEDGE_c1", ``!p. Sp p ==> ~(At p)``,
  PROVE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_c2 = store_thm ("PHIL_KNOWLEDGE_c2", ``!p. At p ==> ~(Sp p)``,
  PROVE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_d = store_thm ("PHIL_KNOWLEDGE_d", ``!p. (Sp p) \/ (At p)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_d1 = store_thm ("PHIL_KNOWLEDGE_d1", ``!p. ~(Sp p) ==> At p``,
  PROVE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_d2 = store_thm ("PHIL_KNOWLEDGE_d2", ``!p. ~(At p) ==> Sp p``,
  PROVE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_e = store_thm ("PHIL_KNOWLEDGE_e", ``(Sp platon) ==> ~(W diogenes)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_f = store_thm ("PHIL_KNOWLEDGE_f", ``(Sp euklid) ==> ~(B diogenes)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_g = store_thm ("PHIL_KNOWLEDGE_g", ``(At diogenes) ==> ~(B euklid)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val PHIL_KNOWLEDGE_h = store_thm ("PHIL_KNOWLEDGE_h", ``(At platon) ==> ~(W euklid)``,
  REWRITE_TAC[PHIL_KNOWLEDGE]);

val _ = export_theory();
116
exercises/e3.tex
Normal file
@ -0,0 +1,116 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
|
||||||
|
\input{../hol_commands.inc}
|
||||||
|
|
||||||
|
\title{Exercise 3}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 3
|
||||||
|
\webversion{}{\\\small{due Friday 12th May}}
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
|
||||||
|
\section{Self-Study}
|
||||||
|
|
||||||
|
\subsection{Tactics and Tacticals}
|
||||||
|
You should read background information on the tactics mentioned in the
|
||||||
|
lecture. For each of the tactics and tacticals mentioned on the
|
||||||
|
slides, read the entry in the description manual.
|
||||||
|
|
||||||
|
\subsection{hol-mode}
|
||||||
|
|
||||||
|
Carefully study the \texttt{Goalstack} submenu of hol-mode. Learn the keycodes to
|
||||||
|
set goals, expand tactics, undo the last tactic expansion, restart the current proof and
|
||||||
|
drop the current goal.
|
||||||
|
|
||||||
|
\subsection{num and list Theories}
|
||||||
|
|
||||||
|
We will use the natural number theory \texttt{numTheory} as well as the list theories
|
||||||
|
\texttt{listTheory} and \texttt{rich\_listTheory} a lot. Please familiarise yourself with
|
||||||
|
these HOL theories (e.\,g.\ by looking at their signature in the HTML version of the HOL Reference). I also recommend reading up on other common theories like \texttt{optionTheory}, \texttt{oneTheory} and \texttt{pairTheory}. These won't be needed for this weeks exercises, though.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Backward Proofs}
|
||||||
|
|
||||||
|
Part of this exercise is searching for useful existing lemmata (\eg
|
||||||
|
useful rewrite rules). Another part is understanding the effect of
|
||||||
|
different rewrite rules and their combination. Even if you have
|
||||||
|
experience with using HOL, please refrain from using automated rewrite
|
||||||
|
tools that spoil these purposes and have not been covered in the
|
||||||
|
lecture yet. Please don't use HOL's simplifier and especially don't
|
||||||
|
use stateful simp-sets. Similarly, please don't use the compute lib
|
||||||
|
via \eg \texttt{EVAL\_TAC}.
|
||||||
|
|
||||||
|
For these exercise, it might be beneficial to open the modules \texttt{listTheory} and \texttt{rich\_listTheory} via \ml{open listTheory rich\_listTheory}. You will notice that when opening them a lot of definitions are printed. This can consume quite some time when opening many large theories. Play around with the hol-mode commands
|
||||||
|
\texttt{Send region to HOL - hide non-errors} and \texttt{Quite - hide output except errors} to avoid this printout and the associated waiting time.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Replay Proofs from Lecture}
|
||||||
|
|
||||||
|
If you never did a tactical proof in HOL before, I recommend following the example interactive proofs from Part VIII.
|
||||||
|
Type them in your own HOL session, make the same mistakes shown in the lecture. Use hol-mode to control the goalStack via
|
||||||
|
commands like expand, back-up, set goal, drop goal. Get a feeling for how to interactively develop a tactical proof.
|
||||||
|
|
||||||
|
This exercise is optional. If you are confident enough, feel free to skip it.
|
||||||
|
|
||||||
|
\subsection{Formalise Induction Proofs from Exercise 1}
|
||||||
|
|
||||||
|
For exercise sheet 1, some simple properties of appending lists were proved with pen and paper via induction. Let's now try to prove them formally using HOL. Prove \hol{!l.\ l ++ [] = l} via induction using the definition of \hol{APPEND} (\hol{++}). Similarly, prove the associativity of \texttt{APPEND}, \ie prove \hol{!l1 l2 l3.\ l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3}.
|
||||||
|
|
||||||
|
\subsection{Reverse}
|
||||||
|
|
||||||
|
In HOL, \texttt{revAppend} is called \hol{REV}. Using any useful lemmata you can find, prove
|
||||||
|
the lemma \hol{!l1 l2.\ LENGTH (REV l1 l2) = (LENGTH l1 + LENGTH l2)}. Now, let us, as an exercise, reprove the existing lemmata \hol{REVERSE\_REV} and \hol{REV\_REVERSE\_LEM}. This means: first prove \hol{!l1 l2.\ REV l1 l2 = REVERSE l1 ++ l2}. Then prove \hol{!l.\ REVERSE l = REV l []} using this theorem. You should not use the theorems \texttt{REVERSE\_REV} or \texttt{REV\_REVERSE\_LEM} in these proofs.
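
To get an intuition for \hol{REV}, it can help to evaluate it on a small
example first (\texttt{EVAL} is fine for such testing, even though it should
not be used in the proofs themselves):

\begin{verbatim}
EVAL ``REV [1;2;3] [4;5]``;   (* |- REV [1;2;3] [4;5] = [3;2;1;4;5] *)
\end{verbatim}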
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Length of Drop}
|
||||||
|
|
||||||
|
Prove \hol{!l1 l2.\ LENGTH (DROP (LENGTH l2) (l1 ++ l2)) = LENGTH l1} with induction, \ie
|
||||||
|
without using lemmata like \texttt{LENGTH\_DROP}. Do one proof with \texttt{Induct\_on} and a very similar proof with \texttt{Induct}. This is a bit tricky.
|
||||||
|
Please play around with the proof for some time. If you can't figure it out, look at the hints at the end of this exercise sheet.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Making Change}
|
||||||
|
|
||||||
|
On exercise sheet 1, you were asked to implement a function \texttt{make\_change} in SML. Let's now define it in HOL and prove some properties. Define the function \texttt{MAKE\_CHANGE} in HOL via
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
val MAKE_CHANGE_def = Define `
|
||||||
|
(MAKE_CHANGE [] a = if (a = 0) then [[]] else []) /\
|
||||||
|
(MAKE_CHANGE (c::cs) a = (
|
||||||
|
(if (c <= a /\ 0 < a) then
|
||||||
|
(MAP (\l. c::l) (MAKE_CHANGE cs (a - c)))
|
||||||
|
else []) ++ (MAKE_CHANGE cs a)))`;
|
||||||
|
\end{verbatim}
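
Before starting the proofs, it is worth testing the definition on a small
example, for instance (again, \texttt{EVAL} is fine for testing, it just
should not appear in the proofs):

\begin{verbatim}
EVAL ``MAKE_CHANGE [2;1;1] 2``;   (* |- MAKE_CHANGE [2;1;1] 2 = [[2]; [1;1]] *)
\end{verbatim}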
|
||||||
|
|
||||||
|
Prove that \hol{!cs.\ MAKE\_CHANGE cs 0 = [[]]} and
|
||||||
|
\hol{!cs a l.\ MEM l (MAKE\_CHANGE cs a) ==> (SUM l = a)} hold.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Hints}
|
||||||
|
|
||||||
|
\subsection{Length of Drop}
|
||||||
|
|
||||||
|
For proving \hol{!l1 l2.\ LENGTH (DROP (LENGTH l2) (l1 ++ l2)) = LENGTH l1} induction on
|
||||||
|
the structure of \hol{l2} is a good strategy. However, one needs to be careful that \hol{l1} stays
|
||||||
|
universally quantified. Expanding naively with \hol{GEN\_TAC >> Induct} will
|
||||||
|
remove the needed universal quantification of \hol{l1}.
|
||||||
|
|
||||||
|
To solve this, you can either use \hol{Induct\_on `l2`} or get rid of both universal quantifiers and then introduce them in a different order again. This is achieved by \hol{REPEAT GEN\_TAC >> SPEC\_TAC (``l1:'a list``, ``l1:'a list``) >> SPEC\_TAC (``l2:'a list``, ``l2:'a list``)}.
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
107
exercises/e3Script.sml
Normal file
107
exercises/e3Script.sml
Normal file
@ -0,0 +1,107 @@
|
|||||||
|
open HolKernel Parse boolLib bossLib;
|
||||||
|
|
||||||
|
open rich_listTheory arithmeticTheory
|
||||||
|
val _ = new_theory "e3";
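
(* Interactive proof sketches for the backward proofs of exercise sheet 3.
   Most of them are raw goals and tactics intended to be stepped through
   with hol-mode and the goalstack rather than run as a batch script. *)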
|
||||||
|
|
||||||
|
|
||||||
|
``!l. APPEND l [] = l``
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC [APPEND],
|
||||||
|
ASM_REWRITE_TAC [APPEND]
|
||||||
|
])
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
open listTheory
|
||||||
|
|
||||||
|
|
||||||
|
``!l1 l2. REV l1 l2 = REVERSE l1 ++ l2``
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC [REV_DEF, REVERSE, APPEND],
|
||||||
|
ASM_REWRITE_TAC [REVERSE, REV_DEF, APPEND_SNOC1]
|
||||||
|
])
|
||||||
|
|
||||||
|
``!l. REVERSE l = REV l []``
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC [REV_DEF, REVERSE],
|
||||||
|
ASM_REWRITE_TAC [REVERSE, REV_DEF]
|
||||||
|
])
|
||||||
|
|
||||||
|
|
||||||
|
``!l1 l2. LENGTH (DROP (LENGTH l2) (l1 ++ l2)) = LENGTH l1``
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC[LENGTH, APPEND, DROP_LENGTH_NIL],
|
||||||
|
|
||||||
|
Cases_on `l2` >| [
|
||||||
|
REWRITE_TAC[APPEND_NIL, LENGTH, DROP],
|
||||||
|
|
||||||
|
ASM_REWRITE_TAC[APPEND, LENGTH, DROP]
|
||||||
|
|
||||||
|
|
||||||
|
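(* Second proof of the same goal, following the hint on the exercise sheet:
   generalise l1 via SPEC_TAC before inducting on l2 *)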
``!l1 l2. LENGTH (DROP (LENGTH l2) (l1 ++ l2)) = LENGTH l1``
|
||||||
|
|
||||||
|
REPEAT GEN_TAC >>
|
||||||
|
SPEC_TAC (``l1:'a list``, ``l1:'a list``) THEN
|
||||||
|
Induct_on `l2` >| [
|
||||||
|
REWRITE_TAC[LENGTH, DROP, APPEND_NIL],
|
||||||
|
|
||||||
|
Cases_on `l1` >| [
|
||||||
|
REWRITE_TAC[LENGTH, DROP, APPEND, DROP_LENGTH_NIL],
|
||||||
|
|
||||||
|
REPEAT GEN_TAC >>
|
||||||
|
REWRITE_TAC[LENGTH, DROP, APPEND] THEN
|
||||||
|
`t ++ h' :: l2 = (t ++ [h']) ++ l2` by REWRITE_TAC[APPEND, GSYM APPEND_ASSOC] >>
|
||||||
|
ASM_REWRITE_TAC[] >>
|
||||||
|
REWRITE_TAC[LENGTH_APPEND, LENGTH, ADD_CLAUSES]
|
||||||
|
]
|
||||||
|
])
|
||||||
|
|
||||||
|
|
||||||
|
val MAKE_CHANGE_def = Define `
|
||||||
|
(MAKE_CHANGE [] a = if (a = 0) then [[]] else []) /\
|
||||||
|
(MAKE_CHANGE (c::cs) a = (
|
||||||
|
(if (c <= a /\ 0 < a) then
|
||||||
|
(MAP (\l. c::l) (MAKE_CHANGE cs (a - c)))
|
||||||
|
else []) ++ (MAKE_CHANGE cs a)))`
|
||||||
|
|
||||||
|
|
||||||
|
val MAKE_CHANGE_PROP = prove(``!cs. MAKE_CHANGE cs 0 = [[]]``,
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC[MAKE_CHANGE_def],
|
||||||
|
ASM_REWRITE_TAC[MAKE_CHANGE_def, prim_recTheory.LESS_REFL, APPEND]
|
||||||
|
])
|
||||||
|
|
||||||
|
val MAKE_CHANGE_PROP1 = prove (``!cs a l.
|
||||||
|
MEM l (MAKE_CHANGE cs a) ==> (SUM l = a)``,
|
||||||
|
|
||||||
|
Induct >| [
|
||||||
|
REPEAT GEN_TAC >>
|
||||||
|
REWRITE_TAC[MAKE_CHANGE_def] >>
|
||||||
|
Cases_on `a = 0` >> (
|
||||||
|
ASM_REWRITE_TAC[MEM] THEN
|
||||||
|
REPEAT STRIP_TAC THEN
|
||||||
|
ASM_REWRITE_TAC[SUM]
|
||||||
|
),
|
||||||
|
|
||||||
|
REPEAT GEN_TAC >>
|
||||||
|
ASM_REWRITE_TAC [MAKE_CHANGE_def, MEM_APPEND, DISJ_IMP_THM] >>
|
||||||
|
Cases_on `h <= a /\ 0 < a` >| [
|
||||||
|
ASM_REWRITE_TAC[MEM_MAP] THEN
|
||||||
|
BETA_TAC THEN
|
||||||
|
REPEAT STRIP_TAC THEN
|
||||||
|
`SUM y = (a - h)` by METIS_TAC[] >>
|
||||||
|
ASM_REWRITE_TAC [SUM] >>
|
||||||
|
DECIDE_TAC,
|
||||||
|
|
||||||
|
ASM_REWRITE_TAC[MEM]
|
||||||
|
]
|
||||||
|
]);
|
||||||
|
|
||||||
|
|
||||||
|
val _ = export_theory();
|
||||||
|
|
24
exercises/e4-material/dot_graphsLib.sig
Normal file
24
exercises/e4-material/dot_graphsLib.sig
Normal file
@ -0,0 +1,24 @@
|
|||||||
|
signature dot_graphsLib =
|
||||||
|
sig
|
||||||
|
type array_graph
|
||||||
|
|
||||||
|
(* binary for running dot *)
|
||||||
|
val dot_binary : string ref;
|
||||||
|
|
||||||
|
(* a fresh, empty one *)
|
||||||
|
val new_array_graph : array_graph
|
||||||
|
|
||||||
|
(* add a node to a graph with number n and the term option content *)
|
||||||
|
val add_node : array_graph -> int -> Abbrev.term option -> array_graph
|
||||||
|
|
||||||
|
(* adds a link between two nodes in the graph *)
|
||||||
|
val add_node_link : array_graph -> int -> int -> string -> array_graph
|
||||||
|
|
||||||
|
(* Various outputs *)
|
||||||
|
val print_graph : array_graph -> unit
|
||||||
|
val graph_to_string : array_graph -> string
|
||||||
|
val show_graph : array_graph -> unit
|
||||||
|
val write_graph_to_dot_file : array_graph -> string -> unit
|
||||||
|
val write_graph_to_png_file : array_graph -> string -> unit
|
||||||
|
|
||||||
|
end
|
62
exercises/e4-material/dot_graphsLib.sml
Normal file
62
exercises/e4-material/dot_graphsLib.sml
Normal file
@ -0,0 +1,62 @@
|
|||||||
|
structure dot_graphsLib :> dot_graphsLib =
|
||||||
|
struct
|
||||||
|
|
||||||
|
open HolKernel Parse
|
||||||
|
|
||||||
|
datatype array_graph = AG of string
|
||||||
|
|
||||||
|
val new_array_graph = AG "";
|
||||||
|
|
||||||
|
(* Auxiliary functions *)
|
||||||
|
fun AG_add (AG s) s' = AG (s ^ " " ^ s' ^ "\n")
|
||||||
|
fun node_name n = ("node_"^(int_to_string n))
|
||||||
|
|
||||||
|
(* create a new node with number n and value v in graph ag *)
|
||||||
|
fun add_node ag (n:int) (v : term option) = let
|
||||||
|
val n_s = node_name n;
|
||||||
|
val v_s = case v of NONE => "-"
|
||||||
|
| SOME t => "'" ^ (term_to_string t) ^ "'"
|
||||||
|
val new_s = (n_s ^ " [label=\"" ^ (int_to_string n) ^": "^v_s^"\"]")
|
||||||
|
in
|
||||||
|
AG_add ag new_s
|
||||||
|
end
|
||||||
|
|
||||||
|
(* Add a link between nodes n1 and n2 *)
|
||||||
|
fun add_node_link ag (n1:int) (n2 : int) (label : string) = let
|
||||||
|
val new_s = (node_name n1) ^ " -> " ^ (node_name n2) ^ " [label=\""^label^"\"]";
|
||||||
|
in
|
||||||
|
AG_add ag new_s
|
||||||
|
end
|
||||||
|
|
||||||
|
fun graph_to_string (AG s) = "digraph G {\n" ^ s ^ "}\n\n"
|
||||||
|
fun print_graph ag = (TextIO.print (graph_to_string ag))
|
||||||
|
|
||||||
|
fun write_graph_to_dot_file ag file_name = let
|
||||||
|
val os = TextIO.openOut file_name;
|
||||||
|
val _ = TextIO.output (os, graph_to_string ag);
|
||||||
|
val _ = TextIO.closeOut os
|
||||||
|
in
|
||||||
|
()
|
||||||
|
end
|
||||||
|
|
||||||
|
val dot_binary = ref "/usr/bin/dot";
|
||||||
|
|
||||||
|
fun show_graph ag = let
|
||||||
|
val p = Unix.execute (!dot_binary, ["-Tx11"])
|
||||||
|
val os = Unix.textOutstreamOf p
|
||||||
|
val _ = TextIO.output (os, graph_to_string ag)
|
||||||
|
val _ = TextIO.closeOut os
|
||||||
|
in
|
||||||
|
()
|
||||||
|
end
|
||||||
|
|
||||||
|
fun write_graph_to_png_file ag filename = let
|
||||||
|
val p = Unix.execute (!dot_binary, ["-Tpng", "-o", filename])
|
||||||
|
val os = Unix.textOutstreamOf p
|
||||||
|
val _ = TextIO.output (os, graph_to_string ag)
|
||||||
|
val _ = TextIO.closeOut os
|
||||||
|
in
|
||||||
|
()
|
||||||
|
end
|
||||||
|
|
||||||
|
end
|
116
exercises/e4-material/e4_arraysLib.sml
Normal file
116
exercises/e4-material/e4_arraysLib.sml
Normal file
@ -0,0 +1,116 @@
|
|||||||
|
structure e4_arraysLib :> e4_arraysLib =
|
||||||
|
struct
|
||||||
|
|
||||||
|
open HolKernel Parse bossLib e4_arraysTheory dot_graphsLib
|
||||||
|
|
||||||
|
(* Example
|
||||||
|
|
||||||
|
val ag = let
|
||||||
|
val ag = new_array_graph
|
||||||
|
val ag = add_node ag 1 NONE
|
||||||
|
val ag = add_node ag 2 NONE
|
||||||
|
val ag = add_node ag 3 NONE
|
||||||
|
val ag = add_node ag 4 (SOME ``A /\ B``)
|
||||||
|
val ag = add_node ag 5 NONE
|
||||||
|
val ag = add_node_link ag 1 2 "a";
|
||||||
|
val ag = add_node_link ag 1 3 "b";
|
||||||
|
val ag = add_node_link ag 3 4 "c";
|
||||||
|
val ag = add_node_link ag 4 5 "d";
|
||||||
|
in
|
||||||
|
ag
|
||||||
|
end
|
||||||
|
|
||||||
|
val _ = (dot_binary := "/usr/bin/dot");
|
||||||
|
|
||||||
|
|
||||||
|
val _ = print_graph ag
|
||||||
|
val _ = show_graph ag
|
||||||
|
|
||||||
|
|
||||||
|
*)
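
(* simple_array n: an array with value k stored at key k for every k < n;
   sparse_array n: an array with value k stored at key 3*k for every k < n;
   both are built by evaluating FOLDL/UPDATE with EVAL *)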
|
||||||
|
|
||||||
|
fun simple_array n = let
|
||||||
|
val n_t = numSyntax.term_of_int n
|
||||||
|
val thm = EVAL ``FOLDL (\a n. UPDATE n a n) EMPTY_ARRAY (COUNT_LIST ^n_t)``
|
||||||
|
in
|
||||||
|
rhs (concl thm)
|
||||||
|
end
|
||||||
|
|
||||||
|
fun sparse_array n = let
|
||||||
|
val n_t = numSyntax.term_of_int n
|
||||||
|
val thm = EVAL ``FOLDL (\a n. UPDATE n a (n*3)) EMPTY_ARRAY (COUNT_LIST ^n_t)``
|
||||||
|
in
|
||||||
|
rhs (concl thm)
|
||||||
|
end
|
||||||
|
|
||||||
|
val a1 = simple_array 10;
|
||||||
|
val a2 = sparse_array 10;
|
||||||
|
val a3 = simple_array 20;
|
||||||
|
val a4 = simple_array 100;
|
||||||
|
|
||||||
|
|
||||||
|
fun is_array_leaf t = same_const t ``Leaf``
|
||||||
|
|
||||||
|
fun dest_array_node t = let
|
||||||
|
val (c, args) = strip_comb t
|
||||||
|
val _ = if (same_const c ``Node``) then () else fail()
|
||||||
|
|
||||||
|
val vo = SOME (optionSyntax.dest_some (el 2 args)) handle HOL_ERR _ => NONE
|
||||||
|
in
|
||||||
|
(el 1 args, vo, el 3 args)
|
||||||
|
end
|
||||||
|
|
||||||
|
val is_array_node = can dest_array_node
|
||||||
|
|
||||||
|
fun graph_of_array_aux ag level suff t =
|
||||||
|
if (is_array_leaf t) then (NONE, ag) else
|
||||||
|
let
|
||||||
|
val (l, vo, r) = dest_array_node t
|
||||||
|
val n = level + suff
|
||||||
|
val m = n - 1;
|
||||||
|
val ag = add_node ag m vo
|
||||||
|
val (l_n, ag) = graph_of_array_aux ag (level*2) n l
|
||||||
|
val ag = case l_n of NONE => ag
|
||||||
|
| SOME ln => add_node_link ag m ln "l"
|
||||||
|
val (r_n, ag) = graph_of_array_aux ag (level*2) suff r
|
||||||
|
val ag = case r_n of NONE => ag
|
||||||
|
| SOME rn => add_node_link ag m rn "r"
|
||||||
|
in
|
||||||
|
(SOME m, ag)
|
||||||
|
end handle HOL_ERR _ => (NONE, ag)
|
||||||
|
|
||||||
|
fun graph_of_array t =
|
||||||
|
snd (graph_of_array_aux new_array_graph 1 0 t)
|
||||||
|
|
||||||
|
|
||||||
|
(* Interactive examples; step through these by hand, they are not part of
   the compiled structure:

show_graph (graph_of_array a1)
|
||||||
|
show_graph (graph_of_array a2)
|
||||||
|
|
||||||
|
print_graph (graph_of_array a1)
|
||||||
|
|
||||||
|
EVAL ``num2boolList 5``
|
||||||
|
a1
|
||||||
|
|
||||||
|
Node
|
||||||
|
(Node (Node Leaf (SOME 6) Leaf)
|
||||||
|
(SOME 2)
|
||||||
|
(Node Leaf (SOME 5) Leaf))
|
||||||
|
|
||||||
|
(SOME 0)
|
||||||
|
|
||||||
|
(Node
|
||||||
|
(Node Leaf (SOME 4) (Node Leaf (SOME 9) Leaf))
|
||||||
|
|
||||||
|
(SOME 1)
|
||||||
|
|
||||||
|
(Node (Node Leaf (SOME 8) Leaf) (SOME 3)
|
||||||
|
|
||||||
|
(Node Leaf (SOME 7) Leaf)))
*)
|
||||||
|
|
||||||
|
end
|
||||||
|
|
||||||
|
|
||||||
|
(* More interactive examples:

print_graph (graph_of_array a2);
|
||||||
|
|
||||||
|
|
||||||
|
show_graph (graph_of_array (simple_array 15));
*)
|
235
exercises/e4-material/e4_arraysScript.sml
Normal file
235
exercises/e4-material/e4_arraysScript.sml
Normal file
@ -0,0 +1,235 @@
|
|||||||
|
open HolKernel Parse boolLib bossLib;
|
||||||
|
|
||||||
|
val _ = new_theory "e4_arrays";
|
||||||
|
|
||||||
|
|
||||||
|
(**************************************************)
|
||||||
|
(* Provided part *)
|
||||||
|
(**************************************************)
|
||||||
|
|
||||||
|
val num2boolList_def = Define `
|
||||||
|
(num2boolList 0 = []) /\
|
||||||
|
(num2boolList 1 = []) /\
|
||||||
|
(num2boolList n = (EVEN n) :: num2boolList (n DIV 2))`
|
||||||
|
|
||||||
|
(* The resulting definition is hard to apply and
|
||||||
|
rewriting with it loops easily. So let's provide
|
||||||
|
a decent lemma capturing the semantics *)
|
||||||
|
|
||||||
|
val num2boolList_REWRS = store_thm ("num2boolList_REWRS",
|
||||||
|
``(num2boolList 0 = []) /\
|
||||||
|
(num2boolList 1 = []) /\
|
||||||
|
(!n. 2 <= n ==> ((num2boolList n = (EVEN n) :: num2boolList (n DIV 2))))``,
|
||||||
|
REPEAT STRIP_TAC >| [
|
||||||
|
METIS_TAC[num2boolList_def],
|
||||||
|
METIS_TAC[num2boolList_def],
|
||||||
|
|
||||||
|
`n <> 0 /\ n <> 1` by DECIDE_TAC >>
|
||||||
|
METIS_TAC[num2boolList_def]
|
||||||
|
]);
|
||||||
|
|
||||||
|
|
||||||
|
(* It is also useful to show when the list is empty *)
|
||||||
|
val num2boolList_EQ_NIL = store_thm ("num2boolList_EQ_NIL",
|
||||||
|
``!n. (num2boolList n = []) <=> ((n = 0) \/ (n = 1))``,
|
||||||
|
GEN_TAC >> EQ_TAC >| [
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
CCONTR_TAC >>
|
||||||
|
FULL_SIMP_TAC list_ss [num2boolList_REWRS],
|
||||||
|
|
||||||
|
REPEAT STRIP_TAC >> (
|
||||||
|
ASM_SIMP_TAC std_ss [num2boolList_REWRS]
|
||||||
|
)
|
||||||
|
]);
|
||||||
|
|
||||||
|
|
||||||
|
(* Now the awkward arithmetic part. Let's show that num2boolList is injective *)
|
||||||
|
|
||||||
|
(* For demonstration, let's define our own induction theorem *)
|
||||||
|
val MY_NUM_INDUCT = store_thm ("MY_NUM_INDUCT",
|
||||||
|
``!P. P 1 /\ (!n. (2 <= n /\ (!m. (m < n /\ m <> 0) ==> P m)) ==> P n) ==> (!n. n <> 0 ==> P n)``,
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
completeInduct_on `n` >>
|
||||||
|
Cases_on `n` >> FULL_SIMP_TAC arith_ss [] >>
|
||||||
|
Cases_on `n'` >> ASM_SIMP_TAC arith_ss [])
|
||||||
|
|
||||||
|
val num2boolList_INJ = store_thm ("num2boolList_INJ",
|
||||||
|
``!n. n <> 0 ==> !m. m <> 0 ==> (num2boolList n = num2boolList m) ==> (n = m)``,
|
||||||
|
|
||||||
|
HO_MATCH_MP_TAC MY_NUM_INDUCT >>
|
||||||
|
CONJ_TAC >- (
|
||||||
|
SIMP_TAC list_ss [num2boolList_REWRS, num2boolList_EQ_NIL]
|
||||||
|
) >>
|
||||||
|
GEN_TAC >> STRIP_TAC >> GEN_TAC >> STRIP_TAC >>
|
||||||
|
Cases_on `m = 1` >- (
|
||||||
|
ASM_SIMP_TAC list_ss [num2boolList_REWRS]
|
||||||
|
) >>
|
||||||
|
ASM_SIMP_TAC list_ss [num2boolList_REWRS] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
`n DIV 2 = m DIV 2` by (
|
||||||
|
`(m DIV 2 <> 0) /\ (n DIV 2 <> 0) /\ (n DIV 2 < n)` suffices_by METIS_TAC[] >>
|
||||||
|
|
||||||
|
ASM_SIMP_TAC arith_ss [arithmeticTheory.NOT_ZERO_LT_ZERO,
|
||||||
|
arithmeticTheory.X_LT_DIV]
|
||||||
|
) >>
|
||||||
|
`n MOD 2 = m MOD 2` by (
|
||||||
|
ASM_SIMP_TAC std_ss [arithmeticTheory.MOD_2]
|
||||||
|
) >>
|
||||||
|
`0 < 2` by DECIDE_TAC >>
|
||||||
|
METIS_TAC[arithmeticTheory.DIVISION]);
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
(* Shifting the keys by one is trivial and by this we get rid of the
|
||||||
|
preconditions of the injectivity theorem *)
|
||||||
|
val num2arrayIndex_def = Define `num2arrayIndex n = (num2boolList (SUC n))`
|
||||||
|
val num2arrayIndex_INJ = store_thm ("num2arrayIndex_INJ",
|
||||||
|
``!n m. (num2arrayIndex n = num2arrayIndex m) <=> (n = m)``,
|
||||||
|
|
||||||
|
SIMP_TAC list_ss [num2arrayIndex_def] >>
|
||||||
|
METIS_TAC [numTheory.NOT_SUC, num2boolList_INJ, numTheory.INV_SUC]);
|
||||||
|
|
||||||
|
|
||||||
|
(* Now let's define the inverse operation *)
|
||||||
|
val boolList2num_def = Define `
|
||||||
|
(boolList2num [] = 1) /\
|
||||||
|
(boolList2num (F::idx) = 2 * boolList2num idx + 1) /\
|
||||||
|
(boolList2num (T::idx) = 2 * boolList2num idx)`
|
||||||
|
|
||||||
|
(* We can show that the inverse is never 0 ... *)
|
||||||
|
val boolList2num_GT_0 = prove (``!idx. 0 < boolList2num idx``,
|
||||||
|
Induct >- SIMP_TAC arith_ss [boolList2num_def] >>
|
||||||
|
Cases >> ASM_SIMP_TAC arith_ss [boolList2num_def]);
|
||||||
|
|
||||||
|
(* ... so we can subtract 1 for the index shift *)
|
||||||
|
val arrayIndex2num_def = Define `arrayIndex2num idx = PRE (boolList2num idx)`
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
(* Now a fiddly proof that we indeed defined the inverse *)
|
||||||
|
val boolList2num_inv = prove (``!idx. num2boolList (boolList2num idx) = idx``,
|
||||||
|
Induct >- (
|
||||||
|
SIMP_TAC arith_ss [boolList2num_def, num2boolList_REWRS]
|
||||||
|
) >>
|
||||||
|
`0 < boolList2num idx` by METIS_TAC[boolList2num_GT_0] >>
|
||||||
|
`0 < 2` by DECIDE_TAC >>
|
||||||
|
Cases >| [
|
||||||
|
`!x. (2 * x) MOD 2 = 0` by
|
||||||
|
METIS_TAC[arithmeticTheory.MOD_EQ_0, arithmeticTheory.MULT_COMM] >>
|
||||||
|
`!x. (2 * x) DIV 2 = x` by
|
||||||
|
METIS_TAC[arithmeticTheory.MULT_DIV, arithmeticTheory.MULT_COMM] >>
|
||||||
|
ASM_SIMP_TAC list_ss [boolList2num_def, num2boolList_REWRS,
|
||||||
|
arithmeticTheory.EVEN_MOD2],
|
||||||
|
|
||||||
|
`!x y. (2 * x + y) MOD 2 = (y MOD 2)` by
|
||||||
|
METIS_TAC[arithmeticTheory.MOD_TIMES, arithmeticTheory.MULT_COMM] >>
|
||||||
|
`!x y. (2 * x + y) DIV 2 = x + y DIV 2` by
|
||||||
|
METIS_TAC[arithmeticTheory.ADD_DIV_ADD_DIV, arithmeticTheory.MULT_COMM] >>
|
||||||
|
ASM_SIMP_TAC list_ss [boolList2num_def, num2boolList_REWRS,
|
||||||
|
arithmeticTheory.EVEN_MOD2]
|
||||||
|
]);
|
||||||
|
|
||||||
|
(* Shifting is easy then *)
|
||||||
|
val arrayIndex2num_inv = store_thm ("arrayIndex2num_inv",
|
||||||
|
``!idx. num2arrayIndex (arrayIndex2num idx) = idx``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[num2arrayIndex_def, arrayIndex2num_def] >>
|
||||||
|
`0 < boolList2num idx` by METIS_TAC[boolList2num_GT_0] >>
|
||||||
|
FULL_SIMP_TAC arith_ss [arithmeticTheory.SUC_PRE] >>
|
||||||
|
ASM_SIMP_TAC std_ss [boolList2num_inv]);
|
||||||
|
|
||||||
|
|
||||||
|
(* It is also very easy to derive other useful properties. *)
|
||||||
|
val num2arrayIndex_inv = store_thm ("num2arrayIndex_inv",
|
||||||
|
``!n. arrayIndex2num (num2arrayIndex n) = n``,
|
||||||
|
METIS_TAC[ num2arrayIndex_INJ, arrayIndex2num_inv]);
|
||||||
|
|
||||||
|
val arrayIndex2num_INJ = store_thm ("arrayIndex2num_INJ",
|
||||||
|
``!idx1 idx2. (arrayIndex2num idx1 = arrayIndex2num idx2) <=> (idx1 = idx2)``,
|
||||||
|
METIS_TAC[ num2arrayIndex_INJ, arrayIndex2num_inv]);
|
||||||
|
|
||||||
|
|
||||||
|
(* A rewrite for the top-level inverse might be handy *)
|
||||||
|
val num2arrayIndex_REWRS = store_thm ("num2arrayIndex_REWRS", ``
|
||||||
|
!n. num2arrayIndex n =
|
||||||
|
if (n = 0) then [] else
|
||||||
|
ODD n :: num2arrayIndex ((n - 1) DIV 2)``,
|
||||||
|
|
||||||
|
REWRITE_TAC[num2arrayIndex_def] >>
|
||||||
|
Cases >> SIMP_TAC arith_ss [num2boolList_REWRS] >>
|
||||||
|
SIMP_TAC arith_ss [arithmeticTheory.ODD, arithmeticTheory.EVEN,
|
||||||
|
arithmeticTheory.ODD_EVEN] >>
|
||||||
|
`(!x r. (2 * x + r) DIV 2 = x + r DIV 2) /\ (!x. (2*x) DIV 2 = x)` by (
|
||||||
|
`0 < 2` by DECIDE_TAC >>
|
||||||
|
METIS_TAC[arithmeticTheory.ADD_DIV_ADD_DIV, arithmeticTheory.MULT_COMM,
|
||||||
|
arithmeticTheory.MULT_DIV]
|
||||||
|
) >>
|
||||||
|
Cases_on `EVEN n'` >> ASM_REWRITE_TAC[] >| [
|
||||||
|
`?m. n' = 2* m` by METIS_TAC[arithmeticTheory.EVEN_ODD_EXISTS] >>
|
||||||
|
ASM_SIMP_TAC arith_ss [arithmeticTheory.ADD1],
|
||||||
|
|
||||||
|
`?m. n' = SUC (2* m)` by METIS_TAC[arithmeticTheory.EVEN_ODD_EXISTS,
|
||||||
|
arithmeticTheory.ODD_EVEN] >>
|
||||||
|
ASM_SIMP_TAC arith_ss [arithmeticTheory.ADD1]
|
||||||
|
]);
|
||||||
|
|
||||||
|
|
||||||
|
(**************************************************)
|
||||||
|
(* YOU SHOULD WORK FROM HERE ON *)
|
||||||
|
(**************************************************)
|
||||||
|
|
||||||
|
(* TODO: Define a datatype for arrays storing values of type 'a. *)
|
||||||
|
val _ = Datatype `array = DUMMY 'a`
|
||||||
|
|
||||||
|
|
||||||
|
(* TODO: Define a new, empty array *)
|
||||||
|
val EMPTY_ARRAY_def = Define `EMPTY_ARRAY : 'a array = ARB`
|
||||||
|
|
||||||
|
(* TODO: define ILOOKUP, IUPDATE and IREMOVE *)
|
||||||
|
val IUPDATE_def = Define `IUPDATE (v : 'a) (a : 'a array) (k : bool list) = a:'a array`
|
||||||
|
val ILOOKUP_def = Define `ILOOKUP (a : 'a array) (k : bool list) = NONE:'a option`
|
||||||
|
val IREMOVE_def = Define `IREMOVE (a : 'a array) (k : bool list) = a:'a array`
|
||||||
|
|
||||||
|
|
||||||
|
(* With these, we can define the lifted operations *)
|
||||||
|
val LOOKUP_def = Define `LOOKUP a n = ILOOKUP a (num2arrayIndex n)`
|
||||||
|
val UPDATE_def = Define `UPDATE v a n = IUPDATE v a (num2arrayIndex n)`
|
||||||
|
val REMOVE_def = Define `REMOVE a n = IREMOVE a (num2arrayIndex n)`
|
||||||
|
|
||||||
|
|
||||||
|
(* TODO: show a few properties *)
|
||||||
|
val LOOKUP_EMPTY = store_thm ("LOOKUP_EMPTY",
|
||||||
|
``!k. LOOKUP EMPTY_ARRAY k = NONE``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
val LOOKUP_UPDATE = store_thm ("LOOKUP_UPDATE",
|
||||||
|
``!v n n' a. LOOKUP (UPDATE v a n) n' =
|
||||||
|
(if (n = n') then SOME v else LOOKUP a n')``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
val LOOKUP_REMOVE = store_thm ("LOOKUP_REMOVE",
|
||||||
|
``!n n' a. LOOKUP (REMOVE a n) n' =
|
||||||
|
(if (n = n') then NONE else LOOKUP a n')``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val UPDATE_REMOVE_EQ = store_thm ("UPDATE_REMOVE_EQ", ``
|
||||||
|
(!v1 v2 n a. UPDATE v1 (UPDATE v2 a n) n = UPDATE v1 a n) /\
|
||||||
|
(!v n a. UPDATE v (REMOVE a n) n = UPDATE v a n) /\
|
||||||
|
(!v n a. REMOVE (UPDATE v a n) n = REMOVE a n)
|
||||||
|
``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val UPDATE_REMOVE_NEQ = store_thm ("UPDATE_REMOVE_NEQ", ``
|
||||||
|
(!v1 v2 a n1 n2. n1 <> n2 ==>
|
||||||
|
((UPDATE v1 (UPDATE v2 a n2) n1) = (UPDATE v2 (UPDATE v1 a n1) n2))) /\
|
||||||
|
(!v a n1 n2. n1 <> n2 ==>
|
||||||
|
((UPDATE v (REMOVE a n2) n1) = (REMOVE (UPDATE v a n1) n2))) /\
|
||||||
|
(!a n1 n2. n1 <> n2 ==>
|
||||||
|
((REMOVE (REMOVE a n2) n1) = (REMOVE (REMOVE a n1) n2)))``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val _ = export_theory();
|
174
exercises/e4.tex
Normal file
174
exercises/e4.tex
Normal file
@ -0,0 +1,174 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
\usepackage{graphicx}
|
||||||
|
|
||||||
|
\input{../hol_commands.inc}
|
||||||
|
|
||||||
|
\title{Exercise 4}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 4
|
||||||
|
\webversion{}{\\\small{due Wednesday 24th May}}
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
|
||||||
|
\section{Functional Arrays}
|
||||||
|
|
||||||
|
Arrays are not easily available in pure functional programs. However,
|
||||||
|
one can easily encode finite maps with natural numbers as keys. One
|
||||||
|
such finite map implementation is known as \emph{functional arrays}.
|
||||||
|
Functional arrays are binary trees that use the binary representation
|
||||||
|
of the key to determine the position in the tree. As a result, the
|
||||||
|
trees are always balanced. This is illustrated by the following picture:
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\includegraphics[width=14cm]{func_array.eps}
|
||||||
|
\end{center}
|
||||||
|
|
||||||
|
The nodes in this tree are annotated with both the decimal and the binary representation of
|
||||||
|
their keys. The root node is for key 1; key 0 is not allowed. All values in
|
||||||
|
the right subtree are even, all in the left odd. This means that the last digit is
|
||||||
|
always 0 or always 1. We then continue with this scheme recursively. At level 2 we look at the
|
||||||
|
second bit, level 3 looks at the third bit and so on.
|
||||||
|
|
||||||
|
Navigating to the node for key $k$ can easily be implemented recursively. We check whether
|
||||||
|
$k$ is 1. If so, we look at the root node of our tree. Otherwise, we check whether $k$ is even.
|
||||||
|
If it is, we search for key $k\ \textit{DIV}\ 2$ in the right subtree, otherwise we look for
|
||||||
|
$k\ \textit{DIV}\ 2$ in the left subtree. Another way of describing the procedure is that we
|
||||||
|
always look at the last bit. If we see 0, we go to the right subtree; if we see 1, we go to the left subtree. We then throw the last bit away and continue. Once we reach the number 1, we stop.
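
As a concrete illustration, here is a small SML sketch of this navigation (it
is not part of the exercise files; it merely spells out the procedure just
described):

\begin{verbatim}
(* path from the root for key k (k >= 1);
   "R" means right subtree, "L" means left subtree *)
fun navigate 1 = []
  | navigate k = (if k mod 2 = 0 then "R" else "L") :: navigate (k div 2);

(* navigate 6 = ["R", "L"]: for key 6 go right, then left, then stop *)
\end{verbatim}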
|
||||||
|
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
Let's implement functional arrays in HOL. Please use the file
|
||||||
|
\texttt{e4\_arraysScript.sml} for this purpose. It contains auxiliary
|
||||||
|
definitions and some outline. Please read all of the exercise sheet
|
||||||
|
(except perhaps the hints section) before you start working.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Datatype}
|
||||||
|
|
||||||
|
Define a datatype for functional arrays. This should be a binary tree type with leafs and nodes.
|
||||||
|
Leafs don't store any information. Each node should have a left and a right subtree as well as perhaps a value. Some keys might have a value stored, others not. So, please use an option type here.
|
||||||
|
|
||||||
|
In C one would have nodes with a left and a right subtree pointer. NULL values in these pointers would indicate that we don't have a subtree. In our functional implementation, this role of NULL pointers is taken by leafs. Notice that leafs are not shown in the picture above. Each node in the last row above implicitly has a left and a right subtree, both of which are leafs.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Implement Basic Operations}
|
||||||
|
|
||||||
|
Let's now implement a lookup, an update and a remove operation, as well as a constant for empty arrays.
|
||||||
|
Checking the bits of the key directly in the recursive definitions of these operations is fiddly.
|
||||||
|
One has to reason about arithmetic a lot and deal with some awkward termination conditions.
|
||||||
|
Therefore, I have already taken care of this fiddly part for you. The function \hol{num2arrayIndex} takes
|
||||||
|
a key and returns an array index. An array index is encoded as a list of booleans. If the list is empty, we should stop at the current node. If it starts with \texttt{F}, we should look at the left subtree; if it starts with \texttt{T}, at the right one. There is also an inverse operation \hol{arrayIndex2num} as well as a few lemmata. Notice that \hol{num2arrayIndex} implicitly adds 1 to the number before
|
||||||
|
looking at the bits. Thus, we can handle 0 and don't need a special case.
|
||||||
|
|
||||||
|
\subsubsection{\texttt{EMPTY\_ARRAY}}
|
||||||
|
|
||||||
|
Define a constant \texttt{EMPTY\_ARRAY} that represents an array, which has no values stored in it at all.
|
||||||
|
|
||||||
|
|
||||||
|
\subsubsection{\texttt{UPDATE} and \texttt{REMOVE}}
|
||||||
|
|
||||||
|
Define an update function. Start with defining a function \texttt{IUPDATE v a idx} that updates array \texttt{a} to contain \texttt{v} for index \texttt{idx}. It should return the updated array. Then use the definition of \texttt{UPDATE} already present in the theory to lift this definition to keys that are natural numbers.
|
||||||
|
Similarly, define a function \texttt{IREMOVE a idx} that removes the value stored for index \texttt{idx} from array \texttt{a}; no value should be stored for this index in the resulting array. As for \texttt{UPDATE}, the lifted version \texttt{REMOVE} is already present in the theory.
|
||||||
|
|
||||||
|
The remove and update functions are very similar. It is beneficial to define a generalised update function that takes an optional value argument. If a value is provided, the current value is updated with it. If no value is provided, the current one is removed.
|
||||||
|
|
||||||
|
\subsection{Test your definition}
|
||||||
|
|
||||||
|
Use \texttt{EVAL} to test whether your definitions work as expected.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Visualise Trees}
|
||||||
|
|
||||||
|
To install HOL, you needed to install the graphviz tool
|
||||||
|
(\url{http://www.graphviz.org/}). Let's use this tool to visualise your trees.
|
||||||
|
I already coded an auxiliary library to communicate with graphviz. You need to
|
||||||
|
write syntax functions for your array type. I recommend having a look at the implementation
|
||||||
|
of an existing syntax library like \texttt{optionSyntax}. Then code functions
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{is\_array\_leaf :\ term -> bool}
|
||||||
|
\item \texttt{is\_array\_node :\ term -> bool}
|
||||||
|
\item \texttt{dest\_array\_node :\ term -> (term * term option * term)}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
Use the provided library \texttt{dot\_graphsLib} to visualise your
|
||||||
|
arrays. Some example code is provided in
|
||||||
|
\texttt{e4\_arraysLib.sml}. Familiarise yourself with this code. The
|
||||||
|
example array \texttt{a2} should have for $n < 10$ the value $n$
|
||||||
|
stored at key $3*n$; all other keys should have no value stored. When you
|
||||||
|
visualise \texttt{a2} the result should look as follows:
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\includegraphics[width=14cm]{a2.eps}
|
||||||
|
\end{center}
|
||||||
|
|
||||||
|
This is a good test of whether your \texttt{UPDATE} works as expected. Feel free, however, to
|
||||||
|
delay this exercise until after the proofs, if you prefer.
|
||||||
|
|
||||||
|
|
||||||
|
\subsubsection{\texttt{LOOKUP}}
|
||||||
|
|
||||||
|
Define a lookup function. As with \texttt{UPDATE}, define \texttt{ILOOKUP} on indexes first and
|
||||||
|
then lift it to numbers. \texttt{LOOKUP a k} should return \texttt{SOME v} iff value \texttt{v} is
|
||||||
|
stored for key \texttt{k} and \texttt{NONE} if no value is stored.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Test your definition}
|
||||||
|
|
||||||
|
Use \texttt{EVAL} to test whether your definition of \texttt{LOOKUP} works as expected.
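
For example, once \texttt{UPDATE} and \texttt{LOOKUP} are defined, checks
along the following lines should evaluate as indicated:

\begin{verbatim}
EVAL ``LOOKUP (UPDATE 3 EMPTY_ARRAY 5) 5``;   (* expected: SOME 3 *)
EVAL ``LOOKUP (UPDATE 3 EMPTY_ARRAY 5) 4``;   (* expected: NONE *)
\end{verbatim}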
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Basic Properties}
|
||||||
|
|
||||||
|
Show that you indeed implemented a finite map data structure. For this purpose, fill in the missing
|
||||||
|
proofs in \texttt{e4\_arraysScript.sml}. Make sure that the resulting theory compiles and can be loaded by \texttt{e4\_arraysLib.sml}.
|
||||||
|
|
||||||
|
You should prove the following properties:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item \texttt{!k.\ LOOKUP EMPTY\_ARRAY k = NONE}
|
||||||
|
\item \texttt{!v n n' a.\ LOOKUP (UPDATE v a n) n' =\\
|
||||||
|
\-\qquad (if (n = n') then SOME v else LOOKUP a n')}
|
||||||
|
\item \texttt{!n n' a.\ LOOKUP (REMOVE a n) n' = (if (n = n') then NONE else LOOKUP a n')}
|
||||||
|
\item \texttt{!v1 v2 n a.\ UPDATE v1 (UPDATE v2 a n) n = UPDATE v1 a n}
|
||||||
|
\item \texttt{!v n a.\ UPDATE v (REMOVE a n) n = UPDATE v a n}
|
||||||
|
\item \texttt{!v n a.\ REMOVE (UPDATE v a n) n = REMOVE a n}
|
||||||
|
\item \texttt{!v1 v2 a n1 n2.\ n1 <> n2 ==>\\\-\qquad
|
||||||
|
((UPDATE v1 (UPDATE v2 a n2) n1) = (UPDATE v2 (UPDATE v1 a n1) n2))}
|
||||||
|
\item \texttt{!v a n1 n2.\ n1 <> n2 ==>\\\-\qquad
|
||||||
|
((UPDATE v (REMOVE a n2) n1) = (REMOVE (UPDATE v a n1) n2))}
|
||||||
|
\item \texttt{!a n1 n2.\ n1 <> n2 ==>\\\-\qquad
|
||||||
|
((REMOVE (REMOVE a n2) n1) = (REMOVE (REMOVE a n1) n2))}
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
\section{Hints}
|
||||||
|
|
||||||
|
If you perform your proofs naively, you need a lot of case-splits and everything gets
|
||||||
|
very lengthy. It is beneficial to introduce auxiliary definitions and to use them, together with many tiny
|
||||||
|
lemmata about them, to avoid case-splits. It might, for example, be beneficial to introduce auxiliary
|
||||||
|
functions \texttt{VAL\_OF\_ROOT} and \texttt{GEN\_GET\_SUBARRAY} and derive the following properties:
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{!a. ILOOKUP a [] = VAL\_OF\_ROOT a}
|
||||||
|
\item \texttt{!a i idx.\ ILOOKUP a (i::idx) = ILOOKUP (GEN\_GET\_SUBARRAY i a) idx}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
11
exercises/e5-material/e5-hints.txt
Normal file
11
exercises/e5-material/e5-hints.txt
Normal file
@ -0,0 +1,11 @@
|
|||||||
|
val IS_WEAK_SUBLIST_REC_def = Define `
|
||||||
|
(IS_WEAK_SUBLIST_REC (l1 : 'a list) ([]:'a list) = T) /\
|
||||||
|
(IS_WEAK_SUBLIST_REC [] (_::_) = F) /\
|
||||||
|
(IS_WEAK_SUBLIST_REC (y::ys) (x::xs) = (
|
||||||
|
(x = y) /\ IS_WEAK_SUBLIST_REC ys xs) \/ (IS_WEAK_SUBLIST_REC ys (x::xs)))`;
|
||||||
|
|
||||||
|
val FILTER_BY_BOOLS_def = Define `
|
||||||
|
FILTER_BY_BOOLS bl l = MAP SND (FILTER FST (ZIP (bl, l)))`
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_def = Define `IS_WEAK_SUBLIST_FILTER l1 l2 =
|
||||||
|
?(bl : bool list). (LENGTH bl = LENGTH l1) /\ (l2 = FILTER_BY_BOOLS bl l1)`
|
185
exercises/e5-material/e5Script.sml
Normal file
185
exercises/e5-material/e5Script.sml
Normal file
@ -0,0 +1,185 @@
|
|||||||
|
open HolKernel Parse boolLib bossLib;
|
||||||
|
|
||||||
|
val _ = new_theory "e5";
|
||||||
|
|
||||||
|
open listTheory rich_listTheory arithmeticTheory
|
||||||
|
|
||||||
|
(**************)
|
||||||
|
(* Question 1 *)
|
||||||
|
(**************)
|
||||||
|
|
||||||
|
(*--- 1.1 --- *)
|
||||||
|
|
||||||
|
(* TODO: Fill in a proper definition *)
|
||||||
|
val IS_WEAK_SUBLIST_REC_def = Define `IS_WEAK_SUBLIST_REC (l1 : 'a list) (l2 : 'a list) = T`
|
||||||
|
|
||||||
|
|
||||||
|
(* Some tests *)
|
||||||
|
val test1 = EVAL ``IS_WEAK_SUBLIST_REC [1;2;3;4;5;6;7] [2;5;6]``;
|
||||||
|
val test2 = EVAL ``IS_WEAK_SUBLIST_REC [1;2;3;4;5;6;7] [2;6;5]``;
|
||||||
|
val test3 = EVAL ``IS_WEAK_SUBLIST_REC [1;2;3;4;5;6;7] [2;5;6;8]``;
|
||||||
|
|
||||||
|
(* TODO: at least 2 sanity check lemmata *)
|
||||||
|
|
||||||
|
|
||||||
|
(*--- 1.2 --- *)
|
||||||
|
|
||||||
|
(* TODO: fill in Definition *)
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_def = Define `IS_WEAK_SUBLIST_FILTER (l1 : 'a list) (l2 : 'a list) = T`
|
||||||
|
|
||||||
|
(* TODO: at least 2 sanity check lemmata *)
|
||||||
|
|
||||||
|
|
||||||
|
(*--- 1.3 --- *)
|
||||||
|
|
||||||
|
(* TODO: prove auxiliary lemmata *)
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_EQUIV = store_thm ("IS_WEAK_SUBLIST_EQUIV",
|
||||||
|
``IS_WEAK_SUBLIST_REC = IS_WEAK_SUBLIST_FILTER``,
|
||||||
|
|
||||||
|
REWRITE_TAC[FUN_EQ_THM] >>
|
||||||
|
CONV_TAC (RENAME_VARS_CONV ["l1", "l2"]) >>
|
||||||
|
cheat)
|
||||||
|
|
||||||
|
|
||||||
|
(*--- 1.4 --- *)
|
||||||
|
|
||||||
|
(* TODO: prove auxiliary lemmata and perhaps reorder the lemmata below *)
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_REC_APPEND_EXTEND_LEFT = store_thm ("IS_WEAK_SUBLIST_REC_APPEND_EXTEND_LEFT",
|
||||||
|
``!l1a l1 l1b l2. IS_WEAK_SUBLIST_REC l1 l2 ==> IS_WEAK_SUBLIST_REC (l1a ++ l1 ++ l1b) l2``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_REC_APPEND = store_thm ("IS_WEAK_SUBLIST_REC_APPEND",
|
||||||
|
``!l1a l1b l2a l2b. IS_WEAK_SUBLIST_REC l1a l2a ==>
|
||||||
|
IS_WEAK_SUBLIST_REC l1b l2b ==>
|
||||||
|
IS_WEAK_SUBLIST_REC (l1a ++ l1b) (l2a ++ l2b)``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_REC_REFL = store_thm ("IS_WEAK_SUBLIST_REC_REFL",
|
||||||
|
``!l. IS_WEAK_SUBLIST_REC l l``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_REC_TRANS = store_thm ("IS_WEAK_SUBLIST_REC_TRANS",
|
||||||
|
``!l1 l2 l3. IS_WEAK_SUBLIST_REC l1 l2 ==>
|
||||||
|
IS_WEAK_SUBLIST_REC l2 l3 ==>
|
||||||
|
IS_WEAK_SUBLIST_REC l1 l3``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_REC_ANTISYM = store_thm ("IS_WEAK_SUBLIST_REC_ANTISYM",
|
||||||
|
``!l1 l2. IS_WEAK_SUBLIST_REC l1 l2 ==>
|
||||||
|
IS_WEAK_SUBLIST_REC l2 l1 ==>
|
||||||
|
(l1 = l2)``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_APPEND_EXTEND_LEFT = store_thm ("IS_WEAK_SUBLIST_FILTER_APPEND_EXTEND_LEFT",
|
||||||
|
``!l1a l1 l1b l2. IS_WEAK_SUBLIST_FILTER l1 l2 ==> IS_WEAK_SUBLIST_FILTER (l1a ++ l1 ++ l1b) l2``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_APPEND = store_thm ("IS_WEAK_SUBLIST_FILTER_APPEND",
|
||||||
|
``!l1a l1b l2a l2b. IS_WEAK_SUBLIST_FILTER l1a l2a ==>
|
||||||
|
IS_WEAK_SUBLIST_FILTER l1b l2b ==>
|
||||||
|
IS_WEAK_SUBLIST_FILTER (l1a ++ l1b) (l2a ++ l2b)``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_REFL = store_thm ("IS_WEAK_SUBLIST_FILTER_REFL",
|
||||||
|
``!l. IS_WEAK_SUBLIST_FILTER l l``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_TRANS = store_thm ("IS_WEAK_SUBLIST_FILTER_TRANS",
|
||||||
|
``!l1 l2 l3. IS_WEAK_SUBLIST_FILTER l1 l2 ==>
|
||||||
|
IS_WEAK_SUBLIST_FILTER l2 l3 ==>
|
||||||
|
IS_WEAK_SUBLIST_FILTER l1 l3``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_ANTISYM = store_thm ("IS_WEAK_SUBLIST_FILTER_ANTISYM",
|
||||||
|
``!l1 l2. IS_WEAK_SUBLIST_FILTER l1 l2 ==>
|
||||||
|
IS_WEAK_SUBLIST_FILTER l2 l1 ==>
|
||||||
|
(l1 = l2)``,
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
(**************)
|
||||||
|
(* Question 2 *)
|
||||||
|
(**************)
|
||||||
|
|
||||||
|
val sh_true_def = Define `sh_true = T`;
|
||||||
|
val sh_var_def = Define `sh_var (v:bool) = v`;
|
||||||
|
val sh_not_def = Define `sh_not b = ~b`;
|
||||||
|
val sh_and_def = Define `sh_and b1 b2 = (b1 /\ b2)`;
|
||||||
|
val sh_or_def = Define `sh_or b1 b2 = (b1 \/ b2)`;
|
||||||
|
val sh_implies_def = Define `sh_implies b1 b2 = (b1 ==> b2)`;
|
||||||
|
|
||||||
|
|
||||||
|
val _ = Datatype `bvar = BVar num`
|
||||||
|
val _ = Datatype `prop = d_true | d_var bvar | d_not prop
|
||||||
|
| d_and prop prop | d_or prop prop
|
||||||
|
| d_implies prop prop`;
|
||||||
|
|
||||||
|
val _ = Datatype `var_assignment = BAssign (bvar -> bool)`
|
||||||
|
val VAR_VALUE_def = Define `VAR_VALUE (BAssign a) v = (a v)`
|
||||||
|
|
||||||
|
val PROP_SEM_def = Define `
|
||||||
|
(PROP_SEM a d_true = T) /\
|
||||||
|
(PROP_SEM a (d_var v) = VAR_VALUE a v) /\
|
||||||
|
(PROP_SEM a (d_not p) = ~(PROP_SEM a p)) /\
|
||||||
|
(PROP_SEM a (d_and p1 p2) = (PROP_SEM a p1 /\ PROP_SEM a p2)) /\
|
||||||
|
(PROP_SEM a (d_or p1 p2) = (PROP_SEM a p1 \/ PROP_SEM a p2)) /\
|
||||||
|
(PROP_SEM a (d_implies p1 p2) = (PROP_SEM a p1 ==> PROP_SEM a p2))`
|
||||||
|
|
||||||
|
|
||||||
|
(* TODO Work on question 2 *)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
(**************)
|
||||||
|
(* Question 3 *)
|
||||||
|
(**************)
|
||||||
|
|
||||||
|
val expunge_def =
|
||||||
|
Define
|
||||||
|
`(expunge x [] = [])
|
||||||
|
/\ (expunge x (h::t) = if x=h then expunge x t else h::expunge x t)`;
|
||||||
|
|
||||||
|
val min_def =
|
||||||
|
Define
|
||||||
|
`(min [] m = m)
|
||||||
|
/\ (min (h::t) m = if m <= h then min t m else min t h)`;
|
||||||
|
|
||||||
|
val minsort_defn =
|
||||||
|
Hol_defn "minsort"
|
||||||
|
`(minsort [] = [])
|
||||||
|
/\ (minsort (h::t) =
|
||||||
|
let m = min t h
|
||||||
|
in
|
||||||
|
m::minsort (expunge m (h::t)))`;
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
(* TODO: prove some auxiliary lemmata and fill in termination proof *)
|
||||||
|
|
||||||
|
(* For interactive use
|
||||||
|
|
||||||
|
Defn.tgoal minsort_defn
|
||||||
|
|
||||||
|
*)
|
||||||
|
|
||||||
|
val (minsort_def, minsort_ind) = Defn.tprove (minsort_defn,
|
||||||
|
WF_REL_TAC `TODO` >>
|
||||||
|
cheat);
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
val _ = export_theory();
|
184
exercises/e5.tex
Normal file
184
exercises/e5.tex
Normal file
@ -0,0 +1,184 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
\usepackage{graphicx}
|
||||||
|
|
||||||
|
\input{../hol_commands.inc}
|
||||||
|
|
||||||
|
\title{Exercise 5}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 5
|
||||||
|
\webversion{}{\\\small{due Friday 26th May (except 1.3 and 1.4)\\ 1.3 and 1.4 due Friday 2nd June}}
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\section{Multiple Definitions / Formal Sanity}
|
||||||
|
|
||||||
|
\ml{rich\_listTheory} provides a predicate \hol{IS\_SUBLIST}. It checks whether
|
||||||
|
a list appears somewhere as part of another list:
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\begin{verbatim}
|
||||||
|
|- !l1 l2. IS_SUBLIST l1 l2 <=> ?l l'. l1 = l ++ (l2 ++ l')
|
||||||
|
\end{verbatim}
|
||||||
|
\end{center}
|
||||||
|
|
||||||
|
Define a weaker version of such a predicate called \hol{IS\_WEAK\_SUBLIST} that allows additional elements between
|
||||||
|
the elements of \texttt{l2}. So, for example, \hol{IS\_WEAK\_SUBLIST [1;2;3;4;5;6;7] [2;5;6]} should hold. In contrast, the statements \hol{IS\_WEAK\_SUBLIST [1;2;3;4;5;6;7] [2;6;5]} or \hol{IS\_WEAK\_SUBLIST [1;2;3;4;5;6;7] [2;5;6;8]} do not hold. Another way of describing the semantics of \hol{IS\_WEAK\_SUBLIST l1 l2} is saying that one can get \texttt{l2} by removing elements from \hol{l1} while keeping the order.
|
||||||
|
|
||||||
|
\subsection{Recursive Definition}
|
||||||
|
|
||||||
|
Define \hol{IS\_WEAK\_SUBLIST} recursively using \hol{Define}. Name your function \hol{IS\_WEAK\_SUBLIST\_REC}. Test this definition via \hol{EVAL} and prove at least 2 sanity check lemmata, which do not coincide with the lemmata you are asked to show below.
|
||||||
|
|
||||||
|
\subsection{Filter Definition}
|
||||||
|
|
||||||
|
Define a version of \hol{IS\_WEAK\_SUBLIST} called \hol{IS\_WEAK\_SUBLIST\_FILTER} using the existing list function \hol{FILTER}. You might want to use \hol{ZIP}, \hol{MAP}, \hol{FST} and \hol{SND} as well. The idea is to check for the existence of a list of booleans of the same length as \hol{l1}, zip this list with \hol{l1} and filter. You probably want to introduce auxiliary definitions before defining \hol{IS\_WEAK\_SUBLIST\_FILTER}.
|
||||||
|
|
||||||
|
The resulting definition is not executable via \hol{EVAL}. Anyhow, show at least 2 sanity check lemmata, which do not coincide with the lemmata you are asked to show below.
|
||||||
|
|
||||||
|
\subsection{Equivalence Proof}
|
||||||
|
|
||||||
|
Show \hol{IS\_WEAK\_SUBLIST\_REC = IS\_WEAK\_SUBLIST\_FILTER}. You might want to prove various auxiliary lemmata first. You might want to use among other things \hol{FUN\_EQ\_THM} and the list function \texttt{REPLICATE}.
|
||||||
|
|
||||||
|
\subsection{Properties}
|
||||||
|
|
||||||
|
Show the following properties of \texttt{IS\_WEAK\_SUBLIST\_REC} and \texttt{IS\_WEAK\_SUBLIST\_FILTER}. This means that for each property stated below in terms of \texttt{IS\_WEAK\_SUBLIST} you should prove one lemma using \texttt{IS\_WEAK\_SUBLIST\_REC} and another lemma using \texttt{IS\_WEAK\_SUBLIST\_FILTER}. Don't use the fact that both functions are equal. The point of this exercise is partly to demonstrate the impact of different definitions on proofs. You might of course use previously proved lemmata to prove additional ones.
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item \hol{!l1a l1 l1b l2.\ IS\_WEAK\_SUBLIST l1 l2 ==>\\
|
||||||
|
\-\qquad IS\_WEAK\_SUBLIST (l1a ++ l1 ++ l1b) l2}
|
||||||
|
\item \hol{!l1a l1b l2a l2b.\ IS\_WEAK\_SUBLIST l1a l2a ==> IS\_WEAK\_SUBLIST l1b l2b ==>\\
|
||||||
|
\-\qquad IS\_WEAK\_SUBLIST (l1a ++ l1b) (l2a ++ l2b)}
|
||||||
|
\item \hol{!l.\ IS\_WEAK\_SUBLIST l l}
|
||||||
|
\item \hol{!l1 l2 l3.\ IS\_WEAK\_SUBLIST l1 l2 ==> IS\_WEAK\_SUBLIST l2 l3 ==>\\
|
||||||
|
\-\qquad IS\_WEAK\_SUBLIST l1 l3}
|
||||||
|
\item \hol{!l1 l2.\ IS\_WEAK\_SUBLIST l1 l2 ==> IS\_WEAK\_SUBLIST l2 l1 ==> (l1 = l2)}
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
\section{Deep and Shallow Embeddings}
|
||||||
|
|
||||||
|
As seen in the lecture, let's define a deep and a shallow embedding of
|
||||||
|
propositional logic. Use the names and definitions from the lecture
|
||||||
|
notes. Add a definition stating that two propositional formulas are
|
||||||
|
equivalent, iff their semantics coincides for all variable
|
||||||
|
assignments, \ie
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\verb#PROP_IS_EQUIV p1 p2 <=> (!a. PROP_SEM a p1 = PROP_SEM a p2)#
|
||||||
|
\end{center}
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Syntax for propositional formulas}
|
||||||
|
|
||||||
|
Define, in SML, syntax functions for all shallowly embedded propositional formulas.
|
||||||
|
Define for each constructor a make-function, a destructor and a check.
|
||||||
|
For \hol{sh\_and} I would like to have for example
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{mk\_sh\_and :\ term -> term -> term},
|
||||||
|
\item \hol{dest\_sh\_and :\ term -> (term * term)} and
|
||||||
|
\item \hol{is\_sh\_and :\ term -> bool}.
|
||||||
|
\end{itemize}
|
||||||
|
Define a check \hol{is\_sh\_prop :\ term -> bool} that checks whether
|
||||||
|
a term is a shallowly embedded propositional formula.
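
For orientation, such functions for \hol{sh\_and} might look roughly as
follows (a sketch in the style of existing syntax libraries; the exact error
handling and the way the constant is obtained are up to you):

\begin{verbatim}
val sh_and_tm = ``sh_and``;

fun mk_sh_and t1 t2 = list_mk_comb (sh_and_tm, [t1, t2]);

fun dest_sh_and t =
  let val (c, args) = strip_comb t in
    if same_const c sh_and_tm andalso length args = 2
    then (el 1 args, el 2 args)
    else failwith "dest_sh_and"
  end;

val is_sh_and = can dest_sh_and;
\end{verbatim}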
|
||||||
|
|
||||||
|
\subsection{Getting Rid of Conjunction and Implication}
|
||||||
|
|
||||||
|
Define a function in HOL \hol{PROP\_CONTAINS\_NO\_AND\_IMPL : prop -> bool} that checks whether a propositional
|
||||||
|
formula contains no conjunction or implication operators. Define a similar function \hol{sh\_prop\_contains\_no\_and\_impl} in SML that checks the same property for shallowly embedded formulas.
|
||||||
|
|
||||||
|
Define a function \hol{PROP\_REMOVE\_AND\_IMPL} in HOL that removes all conjunctions and implications
|
||||||
|
from a propositional formula and returns an equivalent one. Prove these properties, \ie prove
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \verb#!p. PROP_IS_EQUIV (PROP_REMOVE_AND_IMPL p) p#
|
||||||
|
\item \verb#!p. PROP_CONTAINS_NO_AND_IMPL (PROP_REMOVE_AND_IMPL p)#
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
Implement a similar function \hol{sh\_prop\_remove\_and\_impl :\ term -> thm} in SML that performs the same operation on the shallow embedding and returns a theorem stating that the input term is equal to a version without conjunctions and implications. The SML version is allowed to fail, if the input term does not satisfy \hol{is\_sh\_prop}.
|
||||||
|
|
||||||
|
Notice, that \hol{PROP\_REMOVE\_AND\_IMPL} is a verified function, whereas
|
||||||
|
\hol{sh\_prop\_remove\_and\_impl} is a verifying one.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Fancy Function Definitions}
|
||||||
|
|
||||||
|
In the lecture the termination proof for quicksort was briefly discussed.
|
||||||
|
As an exercise, let's define \hol{minsort}. This function \hol{minsort} sorts
|
||||||
|
a list of natural numbers, by always searching a minimal element of the list,
|
||||||
|
put it in front of the list a recursively sort the rest of this list. In HOL,
|
||||||
|
it can be defined as
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
val expunge_def =
|
||||||
|
Define
|
||||||
|
`(expunge x [] = [])
|
||||||
|
/\ (expunge x (h::t) = if x=h then expunge x t else h::expunge x t)`;
|
||||||
|
|
||||||
|
val min_def =
|
||||||
|
Define
|
||||||
|
`(min [] m = m)
|
||||||
|
/\ (min (h::t) m = if m <= h then min t m else min t h)`;
|
||||||
|
|
||||||
|
val minsort_defn =
|
||||||
|
Hol_defn "minsort"
|
||||||
|
`(minsort [] = [])
|
||||||
|
/\ (minsort (h::t) =
|
||||||
|
let m = min t h
|
||||||
|
in
|
||||||
|
m::minsort (expunge m (h::t)))`;
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
Notice that TFL (\ie \hol{Define}) is not able to automatically show
|
||||||
|
that \hol{minsort} is terminating. You need to do this manually. Show
|
||||||
|
auxiliary lemmata about \hol{min} and \hol{expunge} and use them with
|
||||||
|
\hol{Defn.tprove} (and \hol{Defn.tgoal}) to show that \hol{minsort}
|
||||||
|
terminates.
|
||||||
|
|
||||||
|
\clearpage
|
||||||
|
\section{Hints}
|
||||||
|
|
||||||
|
\subsection{Definition of \hol{IS\_WEAK\_SUBLIST}}
|
||||||
|
|
||||||
|
\hol{IS\_WEAK\_SUBLIST\_REC} and \hol{IS\_WEAK\_SUBLIST\_FILTER} can be defined by
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
val IS_WEAK_SUBLIST_REC_def = Define `
|
||||||
|
(IS_WEAK_SUBLIST_REC (l1 : 'a list) ([]:'a list) = T) /\
|
||||||
|
(IS_WEAK_SUBLIST_REC [] (_::_) = F) /\
|
||||||
|
(IS_WEAK_SUBLIST_REC (y::ys) (x::xs) = (
|
||||||
|
(x = y) /\ IS_WEAK_SUBLIST_REC ys xs) \/ (IS_WEAK_SUBLIST_REC ys (x::xs)))`;
|
||||||
|
|
||||||
|
val FILTER_BY_BOOLS_def = Define `
|
||||||
|
FILTER_BY_BOOLS bl l = MAP SND (FILTER FST (ZIP (bl, l)))`
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_def = Define `IS_WEAK_SUBLIST_FILTER l1 l2 =
|
||||||
|
?(bl : bool list). (LENGTH bl = LENGTH l1) /\ (l2 = FILTER_BY_BOOLS bl l1)`
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Termination of \hol{minsort}}
|
||||||
|
|
||||||
|
\hol{minsort} is an example of the TFL library. You can find a termination proof in the HOL sources. However, really try to prove termination yourself first. Before you start looking up the proof, here are a few hints:
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item The main idea is that the length of \hol{expunge m (h::t)} is smaller than the length of \hol{h::t}, \ie start your termination proof with \hol{WF\_REL\_TAC `measure LENGTH`}.
|
||||||
|
\item show the lemma \hol{!x xs.\ LENGTH (expunge x xs) <= LENGTH xs}
|
||||||
|
\item show the lemma \hol{!x xs.\ MEM x xs ==> LENGTH (expunge x xs) < LENGTH xs}
|
||||||
|
\item show the lemma \hol{!x xs.\ MEM (min xs x) (x::xs)}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
125
exercises/e6.tex
Normal file
125
exercises/e6.tex
Normal file
@ -0,0 +1,125 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
\usepackage{graphicx}
|
||||||
|
|
||||||
|
\input{../hol_commands.inc}
|
||||||
|
|
||||||
|
\title{Exercise 6}
|
||||||
|
\def\ttwebflag{}
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 6
|
||||||
|
\webversion{}{\\\small{due Friday 2nd June}}
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\webversion{}{
|
||||||
|
\section{Exercise 5}
|
||||||
|
|
||||||
|
Please finish question 1.3 and 1.4 from exercise 5. You might find the simplifier helpful.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Final Project}
|
||||||
|
|
||||||
|
We have 3 more weeks to go on the lecture. For these last 3 weeks, I would like to set a small project for everyone to solve. There will be a default project (see below). However, if you want, you can also propose your own project. This has to be approved by me before you can start working on it. It should satisfy the following requirements:
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item You should build a formal model of some description. This description can \eg be a natural language text or some computer program.
|
||||||
|
\item You should test your model against the description / implementation.
|
||||||
|
\item You should prove some interesting properties.
|
||||||
|
\item It should be do-able in a reasonable amount of time (ideally 3 weeks). You have to either convince me that it is doable in 3 weeks or worth both your and my additional time.
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
Please read the default project proposal below. Either decide to do this project or think of an alternative final project. In any case, discuss your choice with me. \webversion{}{\textbf{Whoever has not discussed a final project with me by June 2nd will be forced to do the default project.}}
|
||||||
|
|
||||||
|
\clearpage
|
||||||
|
}
|
||||||
|
|
||||||
|
\webversion{\section{Final Project}}{\section{Default Project - Regular Expressions in HOL}}
|
||||||
|
|
||||||
|
There is a fun paper on regular expressions: \emph{A Play on Regular
|
||||||
|
Expressions} by Sebastian Fischer, Frank Huch and Thomas Wilke
|
||||||
|
published as a functional pearl at ICFP 2010
|
||||||
|
(\url{http://www-ps.informatik.uni-kiel.de/~sebf/pub/regexp-play.html}).
|
||||||
|
In this paper an implementation of marked regular expressions in
|
||||||
|
Haskell is described. The task is to formalise the simple parts of
|
||||||
|
this work in HOL, verify the correctness of the implementation and
|
||||||
|
export trustworthy code into an SML library.
|
||||||
|
|
||||||
|
You should develop this project such that (in theory) it could be
|
||||||
|
added to the examples directory of HOL. Therefore, I want you to
|
||||||
|
create a git-repository for your project\webversion{}{ and give me access to it}. You
|
||||||
|
should create one or more HOL-theories that can be compiled by
|
||||||
|
Holmake. There will be multiple SML files as well. These should
|
||||||
|
compile decently and have a signature. Please provide a selftest for
|
||||||
|
your development. Write decent documentation. There should be a (very
|
||||||
|
short) \texttt{README} as well as sufficient comments in the code.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Basic Regular Expression Semantics}
|
||||||
|
|
||||||
|
Read Act 1, Scene 1. Implement the \texttt{Reg} datatype in HOL. Like later in the paper,
|
||||||
|
replace the type \texttt{Char} with a free type variable \texttt{'a}. The intention is to define
|
||||||
|
regular expressions on lists of type \texttt{'a}. Define a function \texttt{language\_of :\ 'a Reg -> ('a list) set} that returns the language accepted by a regular expression. The definition of
|
||||||
|
\texttt{language\_of} should be as clean and simple as possible. It does not need to be executable.
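
For reference, the datatype from the paper carried over to HOL might look
roughly like this (constructor names as in the paper; feel free to deviate):

\begin{verbatim}
val _ = Datatype `Reg = Eps | Sym 'a | Alt Reg Reg | Seq Reg Reg | Rep Reg`;
\end{verbatim}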
|
||||||
|
|
||||||
|
\subsection{Executable Semantics}
|
||||||
|
Now define the function \texttt{accept} in HOL. While doing so replace the type \texttt{String} with \texttt{'a list} to match the changes to \texttt{Reg}. You will need to implement the auxiliary functions \texttt{parts} and \texttt{split}. Test your definitions and apply formal sanity checks.
|
||||||
|
|
||||||
|
\subsection{Code Extraction and Conformance Testing}
|
||||||
|
|
||||||
|
Familiarise yourself with \texttt{EmitML}. Use it to extract your datatype \texttt{Reg} and the function \texttt{accept} to ML. Test \texttt{accept} against the regular expression implementation in \texttt{regexpMatch.sml} that comes with HOL.
|
||||||
|
|
||||||
|
\texttt{EmitML} has not been discussed in the lecture and is not well documented. Part of this challenge is to find information for yourself about HOL libraries and learn from examples and source code.
|
||||||
|
|
||||||
|
\subsection{Correctness Proof}
|
||||||
|
|
||||||
|
Prove that \texttt{accept} and \texttt{language\_of} agree with each other, \ie prove the statement \texttt{!r w.\ accept r w <=> w IN (language\_of r)}.
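
If you are unsure how to get started, the usual interactive workflow is to set up the goal and begin with a structural induction on the regular expression. The following is only a skeleton and assumes your own definitions of \texttt{accept} and \texttt{language\_of}; the real work is of course proving the resulting cases.

\begin{verbatim}
(* interactive proof skeleton: state the goal and start an
   induction over the structure of the regular expression  *)
g `!r w. accept r w <=> w IN (language_of r)`;
e (Induct_on `r`);
\end{verbatim}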
|
||||||
|
|
||||||
|
\subsection{Marked Regular Expressions}
|
||||||
|
|
||||||
|
Continue reading the paper. Act 1, Scene 2 is interesting, but we are not interested in weights here. Instead, focus on Act 2, Scene 1. Implement a datatype for marked regular expressions called \texttt{MReg}. Use the simple version first, without caching the values of \texttt{empty} and \texttt{final}. Provide a function \texttt{MARK\_REG :\ 'a Reg -> 'a MReg} that turns a regular expression into a marked expression without any marks set. Implement a function \texttt{acceptM :\ 'a MReg -> 'a list -> bool} following the idea of the \texttt{accept} function in the paper.
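
If you are unsure how to represent the marks, one possible (by no means mandatory) encoding simply attaches a boolean mark to every symbol:

\begin{verbatim}
(* one possible encoding of marked regular expressions: the
   boolean of MSym records whether the symbol is marked     *)
val _ = Datatype `MReg = MEps
                       | MSym bool 'a
                       | MAlt MReg MReg
                       | MSeq MReg MReg
                       | MRep MReg`;
\end{verbatim}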
|
||||||
|
|
||||||
|
Test your definitions and perform formal sanity checks.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Correctness Proof Marked Regular Expressions}
|
||||||
|
|
||||||
|
Show that \texttt{acceptM} is correct, \ie show
|
||||||
|
\texttt{!r w.\ acceptM (MARK\_REG r) w <=> w IN (language\_of r)}.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Cached Marked Regular Expressions}
|
||||||
|
|
||||||
|
Now let's also implement the caching of \texttt{empty} and \texttt{final}. Call the resulting datatype \texttt{CMReg}. It is tempting to define mutually recursive types \texttt{CMReg} and \texttt{CMRe} as in the paper. However, HOL's automation won't work well on such a type, so I advise manually encoding a cache (\ie adding extra boolean arguments to the constructors of \texttt{MReg}). Write a function \texttt{CACHE\_REG :\ 'a MReg -> 'a CMReg} that turns a marked regular expression into a cached marked one with valid caches. Implement a function \texttt{acceptCM :\ 'a CMReg -> 'a list -> bool} that is similar to \texttt{acceptM}, but more efficient due to using the caches.
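
Concretely, the manual cache encoding could for example add two booleans to every constructor, caching the values of \texttt{empty} and \texttt{final} for the corresponding subexpression (again only a sketch; you may well prefer a different arrangement):

\begin{verbatim}
(* one possible cached version: in each constructor the first two
   booleans cache empty and final, the third boolean of CMSym is
   the mark                                                       *)
val _ = Datatype `CMReg = CMEps bool bool
                        | CMSym bool bool bool 'a
                        | CMAlt bool bool CMReg CMReg
                        | CMSeq bool bool CMReg CMReg
                        | CMRep bool bool CMReg`;
\end{verbatim}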
|
||||||
|
|
||||||
|
Test your definitions and perform formal sanity checks. As part of formal sanity,
|
||||||
|
define a well-formedness predicate for cached marked regular expressions stating that the cached values for \texttt{empty} and \texttt{final} are correct.
|
||||||
|
Moreover, define the inverse function \texttt{UNCACHE\_REG :\ 'a CMReg -> 'a MReg} of \texttt{CACHE\_REG} and show that these functions are really inverses.
|
||||||
|
|
||||||
|
\subsection{Correctness Proof Caches}
|
||||||
|
|
||||||
|
Show that \texttt{acceptCM} is correct, \ie show
|
||||||
|
\texttt{!r w.\ acceptCM (CACHE\_REG (MARK\_REG r)) w <=> w IN (language\_of r)}.
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{SML Library}
|
||||||
|
|
||||||
|
Use \texttt{EmitML} to extract your code to SML. Provide an interface for regular expressions on strings. The interface should contain a type for regular expressions on strings similar to \texttt{char Reg}. It should provide a function \texttt{match} that checks whether such a regular expression matches a given string. Build 4 instances of this interface: one based on the regular expression library \texttt{regexpMatch.sml} that comes with HOL, and one each for \texttt{accept}, \texttt{acceptM} and \texttt{acceptCM}. Write some simple tests and run them against all these instantiations (\eg via a functor). Perform some simple performance measurements.
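
For the interface, a minimal SML signature along the following lines might serve as a starting point (the names are only a suggestion). Exposing abstract constructors keeps the test functor independent of the concrete representation used by each instance.

\begin{verbatim}
(* a possible common interface: each of the four matchers is a
   structure implementing this signature, so the same tests can be
   run against all of them via a functor                           *)
signature STRING_REGEXP = sig
  type regexp                          (* regexps over characters *)
  val eps   : regexp
  val sym   : char -> regexp
  val alt   : regexp * regexp -> regexp
  val seq   : regexp * regexp -> regexp
  val rep   : regexp -> regexp
  val match : regexp -> string -> bool
end
\end{verbatim}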
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
70
exercises/e7.tex
Normal file
70
exercises/e7.tex
Normal file
@ -0,0 +1,70 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
\usepackage{graphicx}
|
||||||
|
|
||||||
|
\input{../hol_commands.inc}
|
||||||
|
|
||||||
|
\title{Exercise 7}
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part}ITP Exercise 7
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\webversion{}{
|
||||||
|
\section{Final Project}
|
||||||
|
|
||||||
|
You should all have started working on your final project by now. The last lecture will take place on Monday, 12th June. Practical sessions will continue until 23rd June. There will be an additional practical session on 19th June during the slot normally used for the lecture. The final project is due on the 23rd at the very latest. If other agreements have been reached in person, these take precedence. Despite the late deadline, you are still required to turn up to at least one practical session each week. I highly recommend attending more than one each week.
|
||||||
|
}
|
||||||
|
\section{Exercise 7}
|
||||||
|
|
||||||
|
\webversion{}{Since you are working on the final project, exercise 7 is optional. You are not required to
|
||||||
|
work on it and there is no deadline.}
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Advanced Definition Principles}
|
||||||
|
|
||||||
|
Define the reflexive, transitive closure of a relation using 3 different methods as shown below.
|
||||||
|
\hol{RTC\_REL} defines it using the inductive relation library, \hol{RTC\_DIRECT} uses
|
||||||
|
a simple higher order logic definition, and \hol{RTC\_REC} uses recursion on natural numbers. Show that all three definitions define the same function, \ie show
|
||||||
|
\hol{(RTC\_REL = RTC\_DIRECT) \holAnd{} (RTC\_REL = RTC\_REC) \holAnd{} (RTC\_REC = RTC\_DIRECT)}.
|
||||||
|
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
val (RTC_REL_rules, RTC_REL_ind, RTC_REL_cases) = Hol_reln `
|
||||||
|
(!x y. R x y ==> RTC_REL R x y) /\
|
||||||
|
(!x. RTC_REL R x x) /\
|
||||||
|
(!x y z. RTC_REL R x y /\ RTC_REL R y z ==> RTC_REL R x z)`
|
||||||
|
|
||||||
|
val RTC_DIRECT_def = new_definition ("RTC_DIRECT",
|
||||||
|
``RTC_DIRECT R = \(a:'a) (b:'a). !P.
|
||||||
|
((!x. P x x) /\ (!x y. R x y ==> P x y) /\
|
||||||
|
(!x y z. (P x y /\ P y z) ==> P x z)) ==>
|
||||||
|
(P a b)``);
|
||||||
|
|
||||||
|
val RTC_REC_NUM_def = Define `
|
||||||
|
(RTC_REC_NUM 0 R x y <=> (x = y)) /\
|
||||||
|
(RTC_REC_NUM (SUC n) R x y <=> (?z. R x z /\ RTC_REC_NUM n R z y))`
|
||||||
|
|
||||||
|
val RTC_REC_def = Define `
|
||||||
|
RTC_REC R x y = ?n. RTC_REC_NUM n R x y`;
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
|
||||||
|
\subsection{Simplifier}
|
||||||
|
|
||||||
|
Dig out the code for \texttt{find\_contr\_in\_conj\_CONV} you wrote for exercise 2, section 4.4. Create a simpset fragment containing this conversion and use it on several examples.
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
32
exercises/func_array.dot
Normal file
32
exercises/func_array.dot
Normal file
@ -0,0 +1,32 @@
|
|||||||
|
digraph G {
|
||||||
|
node_0 [label="1: 1"]
|
||||||
|
node_2 [label="3: 11"]
|
||||||
|
node_1 [label="2: 10"]
|
||||||
|
node_6 [label="7: 111"]
|
||||||
|
node_5 [label="6: 110"]
|
||||||
|
node_4 [label="5: 101"]
|
||||||
|
node_3 [label="4: 100"]
|
||||||
|
|
||||||
|
node_14 [label="15: 1111"]
|
||||||
|
node_13 [label="14: 1110"]
|
||||||
|
node_12 [label="13: 1101"]
|
||||||
|
node_11 [label="12: 1100"]
|
||||||
|
node_10 [label="11: 1011"]
|
||||||
|
node_9 [label="10: 1010"]
|
||||||
|
node_8 [label="9: 1001"]
|
||||||
|
node_7 [label="8: 1000"]
|
||||||
|
node_1 -> node_3 [label="r"]
|
||||||
|
node_1 -> node_5 [label="l"]
|
||||||
|
node_0 -> node_2 [label="l"]
|
||||||
|
node_0 -> node_1 [label="r"]
|
||||||
|
node_2 -> node_4 [label="r"]
|
||||||
|
node_2 -> node_6 [label="l"]
|
||||||
|
node_3 -> node_11 [label="l"]
|
||||||
|
node_3 -> node_7 [label="r"]
|
||||||
|
node_4 -> node_12 [label="l"]
|
||||||
|
node_4 -> node_8 [label="r"]
|
||||||
|
node_5 -> node_13 [label="l"]
|
||||||
|
node_5 -> node_9 [label="r"]
|
||||||
|
node_6 -> node_10 [label="r"]
|
||||||
|
node_6 -> node_14 [label="l"]
|
||||||
|
}
|
653
exercises/func_array.eps
Normal file
653
exercises/func_array.eps
Normal file
@ -0,0 +1,653 @@
|
|||||||
|
%!PS-Adobe-3.0 EPSF-3.0
|
||||||
|
%%Creator: graphviz version 2.38.0 (20140413.2041)
|
||||||
|
%%Title: G
|
||||||
|
%%Pages: 1
|
||||||
|
%%BoundingBox: 36 36 826 341
|
||||||
|
%%EndComments
|
||||||
|
save
|
||||||
|
%%BeginProlog
|
||||||
|
/DotDict 200 dict def
|
||||||
|
DotDict begin
|
||||||
|
|
||||||
|
/setupLatin1 {
|
||||||
|
mark
|
||||||
|
/EncodingVector 256 array def
|
||||||
|
EncodingVector 0
|
||||||
|
|
||||||
|
ISOLatin1Encoding 0 255 getinterval putinterval
|
||||||
|
EncodingVector 45 /hyphen put
|
||||||
|
|
||||||
|
% Set up ISO Latin 1 character encoding
|
||||||
|
/starnetISO {
|
||||||
|
dup dup findfont dup length dict begin
|
||||||
|
{ 1 index /FID ne { def }{ pop pop } ifelse
|
||||||
|
} forall
|
||||||
|
/Encoding EncodingVector def
|
||||||
|
currentdict end definefont
|
||||||
|
} def
|
||||||
|
/Times-Roman starnetISO def
|
||||||
|
/Times-Italic starnetISO def
|
||||||
|
/Times-Bold starnetISO def
|
||||||
|
/Times-BoldItalic starnetISO def
|
||||||
|
/Helvetica starnetISO def
|
||||||
|
/Helvetica-Oblique starnetISO def
|
||||||
|
/Helvetica-Bold starnetISO def
|
||||||
|
/Helvetica-BoldOblique starnetISO def
|
||||||
|
/Courier starnetISO def
|
||||||
|
/Courier-Oblique starnetISO def
|
||||||
|
/Courier-Bold starnetISO def
|
||||||
|
/Courier-BoldOblique starnetISO def
|
||||||
|
cleartomark
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
%%BeginResource: procset graphviz 0 0
|
||||||
|
/coord-font-family /Times-Roman def
|
||||||
|
/default-font-family /Times-Roman def
|
||||||
|
/coordfont coord-font-family findfont 8 scalefont def
|
||||||
|
|
||||||
|
/InvScaleFactor 1.0 def
|
||||||
|
/set_scale {
|
||||||
|
dup 1 exch div /InvScaleFactor exch def
|
||||||
|
scale
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
% styles
|
||||||
|
/solid { [] 0 setdash } bind def
|
||||||
|
/dashed { [9 InvScaleFactor mul dup ] 0 setdash } bind def
|
||||||
|
/dotted { [1 InvScaleFactor mul 6 InvScaleFactor mul] 0 setdash } bind def
|
||||||
|
/invis {/fill {newpath} def /stroke {newpath} def /show {pop newpath} def} bind def
|
||||||
|
/bold { 2 setlinewidth } bind def
|
||||||
|
/filled { } bind def
|
||||||
|
/unfilled { } bind def
|
||||||
|
/rounded { } bind def
|
||||||
|
/diagonals { } bind def
|
||||||
|
/tapered { } bind def
|
||||||
|
|
||||||
|
% hooks for setting color
|
||||||
|
/nodecolor { sethsbcolor } bind def
|
||||||
|
/edgecolor { sethsbcolor } bind def
|
||||||
|
/graphcolor { sethsbcolor } bind def
|
||||||
|
/nopcolor {pop pop pop} bind def
|
||||||
|
|
||||||
|
/beginpage { % i j npages
|
||||||
|
/npages exch def
|
||||||
|
/j exch def
|
||||||
|
/i exch def
|
||||||
|
/str 10 string def
|
||||||
|
npages 1 gt {
|
||||||
|
gsave
|
||||||
|
coordfont setfont
|
||||||
|
0 0 moveto
|
||||||
|
(\() show i str cvs show (,) show j str cvs show (\)) show
|
||||||
|
grestore
|
||||||
|
} if
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
/set_font {
|
||||||
|
findfont exch
|
||||||
|
scalefont setfont
|
||||||
|
} def
|
||||||
|
|
||||||
|
% draw text fitted to its expected width
|
||||||
|
/alignedtext { % width text
|
||||||
|
/text exch def
|
||||||
|
/width exch def
|
||||||
|
gsave
|
||||||
|
width 0 gt {
|
||||||
|
[] 0 setdash
|
||||||
|
text stringwidth pop width exch sub text length div 0 text ashow
|
||||||
|
} if
|
||||||
|
grestore
|
||||||
|
} def
|
||||||
|
|
||||||
|
/boxprim { % xcorner ycorner xsize ysize
|
||||||
|
4 2 roll
|
||||||
|
moveto
|
||||||
|
2 copy
|
||||||
|
exch 0 rlineto
|
||||||
|
0 exch rlineto
|
||||||
|
pop neg 0 rlineto
|
||||||
|
closepath
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
/ellipse_path {
|
||||||
|
/ry exch def
|
||||||
|
/rx exch def
|
||||||
|
/y exch def
|
||||||
|
/x exch def
|
||||||
|
matrix currentmatrix
|
||||||
|
newpath
|
||||||
|
x y translate
|
||||||
|
rx ry scale
|
||||||
|
0 0 1 0 360 arc
|
||||||
|
setmatrix
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
/endpage { showpage } bind def
|
||||||
|
/showpage { } def
|
||||||
|
|
||||||
|
/layercolorseq
|
||||||
|
[ % layer color sequence - darkest to lightest
|
||||||
|
[0 0 0]
|
||||||
|
[.2 .8 .8]
|
||||||
|
[.4 .8 .8]
|
||||||
|
[.6 .8 .8]
|
||||||
|
[.8 .8 .8]
|
||||||
|
]
|
||||||
|
def
|
||||||
|
|
||||||
|
/layerlen layercolorseq length def
|
||||||
|
|
||||||
|
/setlayer {/maxlayer exch def /curlayer exch def
|
||||||
|
layercolorseq curlayer 1 sub layerlen mod get
|
||||||
|
aload pop sethsbcolor
|
||||||
|
/nodecolor {nopcolor} def
|
||||||
|
/edgecolor {nopcolor} def
|
||||||
|
/graphcolor {nopcolor} def
|
||||||
|
} bind def
|
||||||
|
|
||||||
|
/onlayer { curlayer ne {invis} if } def
|
||||||
|
|
||||||
|
/onlayers {
|
||||||
|
/myupper exch def
|
||||||
|
/mylower exch def
|
||||||
|
curlayer mylower lt
|
||||||
|
curlayer myupper gt
|
||||||
|
or
|
||||||
|
{invis} if
|
||||||
|
} def
|
||||||
|
|
||||||
|
/curlayer 0 def
|
||||||
|
|
||||||
|
%%EndResource
|
||||||
|
%%EndProlog
|
||||||
|
%%BeginSetup
|
||||||
|
14 default-font-family set_font
|
||||||
|
1 setmiterlimit
|
||||||
|
% /arrowlength 10 def
|
||||||
|
% /arrowwidth 5 def
|
||||||
|
|
||||||
|
% make sure pdfmark is harmless for PS-interpreters other than Distiller
|
||||||
|
/pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse
|
||||||
|
% make '<<' and '>>' safe on PS Level 1 devices
|
||||||
|
/languagelevel where {pop languagelevel}{1} ifelse
|
||||||
|
2 lt {
|
||||||
|
userdict (<<) cvn ([) cvn load put
|
||||||
|
userdict (>>) cvn ([) cvn load put
|
||||||
|
} if
|
||||||
|
|
||||||
|
%%EndSetup
|
||||||
|
setupLatin1
|
||||||
|
%%Page: 1 1
|
||||||
|
%%PageBoundingBox: 36 36 826 341
|
||||||
|
%%PageOrientation: Portrait
|
||||||
|
0 0 1 beginpage
|
||||||
|
gsave
|
||||||
|
36 36 790 305 boxprim clip newpath
|
||||||
|
1 1 set_scale 0 rotate 40 40 translate
|
||||||
|
% node_0
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
424.25 279 27 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
413.25 275.3 moveto 22 (1: 1) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_2
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
273.25 192 28.7 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
259.25 188.3 moveto 28 (3: 11) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_0->node_2
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 404.19 266.71 moveto
|
||||||
|
378.17 252.06 332.54 226.38 302.61 209.53 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 304.3 206.46 moveto
|
||||||
|
293.87 204.61 lineto
|
||||||
|
300.86 212.56 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 304.3 206.46 moveto
|
||||||
|
293.87 204.61 lineto
|
||||||
|
300.86 212.56 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
358.25 231.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_1
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
493.25 192 28.7 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
479.25 188.3 moveto 28 (2: 10) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_0->node_1
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 436.58 262.8 moveto
|
||||||
|
447.11 249.83 462.42 230.97 474.48 216.12 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 477.2 218.32 moveto
|
||||||
|
480.78 208.35 lineto
|
||||||
|
471.76 213.91 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 477.2 218.32 moveto
|
||||||
|
480.78 208.35 lineto
|
||||||
|
471.76 213.91 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
463.25 231.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_6
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
144.25 105 33.29 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
126.75 101.3 moveto 35 (7: 111) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_2->node_6
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 254.21 178.46 moveto
|
||||||
|
232.86 164.39 198 141.42 173.25 125.11 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 174.91 122.02 moveto
|
||||||
|
164.64 119.44 lineto
|
||||||
|
171.06 127.86 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 174.91 122.02 moveto
|
||||||
|
164.64 119.44 lineto
|
||||||
|
171.06 127.86 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
217.25 144.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_4
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
273.25 105 33.29 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
255.75 101.3 moveto 35 (5: 101) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_2->node_4
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 273.25 173.8 moveto
|
||||||
|
273.25 162.16 273.25 146.55 273.25 133.24 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 276.75 133.18 moveto
|
||||||
|
273.25 123.18 lineto
|
||||||
|
269.75 133.18 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 276.75 133.18 moveto
|
||||||
|
273.25 123.18 lineto
|
||||||
|
269.75 133.18 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
273.25 144.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_5
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
493.25 105 33.29 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
475.75 101.3 moveto 35 (6: 110) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_1->node_5
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 493.25 173.8 moveto
|
||||||
|
493.25 162.16 493.25 146.55 493.25 133.24 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 496.75 133.18 moveto
|
||||||
|
493.25 123.18 lineto
|
||||||
|
489.75 133.18 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 496.75 133.18 moveto
|
||||||
|
493.25 123.18 lineto
|
||||||
|
489.75 133.18 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
493.25 144.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_3
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
646.25 105 33.29 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
628.75 101.3 moveto 35 (4: 100) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_1->node_3
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 513.88 179.54 moveto
|
||||||
|
539.9 165.08 584.88 140.09 615.09 123.31 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 616.92 126.3 moveto
|
||||||
|
623.96 118.38 lineto
|
||||||
|
613.52 120.18 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 616.92 126.3 moveto
|
||||||
|
623.96 118.38 lineto
|
||||||
|
613.52 120.18 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
579.25 144.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_14
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
42.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
17.75 14.3 moveto 49 (15: 1111) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_6->node_14
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 126.94 89.58 moveto
|
||||||
|
110.97 76.27 87.03 56.32 68.66 41.01 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 70.59 38.07 moveto
|
||||||
|
60.67 34.35 lineto
|
||||||
|
66.11 43.44 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 70.59 38.07 moveto
|
||||||
|
60.67 34.35 lineto
|
||||||
|
66.11 43.44 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
100.25 57.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_10
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
144.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
119.75 14.3 moveto 49 (11: 1011) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_6->node_10
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 144.25 86.8 moveto
|
||||||
|
144.25 75.16 144.25 59.55 144.25 46.24 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 147.75 46.18 moveto
|
||||||
|
144.25 36.18 lineto
|
||||||
|
140.75 46.18 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 147.75 46.18 moveto
|
||||||
|
144.25 36.18 lineto
|
||||||
|
140.75 46.18 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
144.25 57.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_13
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
442.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
417.75 14.3 moveto 49 (14: 1110) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_5->node_13
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 483.41 87.61 moveto
|
||||||
|
476.05 75.34 465.85 58.34 457.44 44.32 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 460.37 42.4 moveto
|
||||||
|
452.22 35.63 lineto
|
||||||
|
454.37 46 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 460.37 42.4 moveto
|
||||||
|
452.22 35.63 lineto
|
||||||
|
454.37 46 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
471.25 57.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_9
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
544.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
519.75 14.3 moveto 49 (10: 1010) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_5->node_9
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 503.08 87.61 moveto
|
||||||
|
510.44 75.34 520.64 58.34 529.05 44.32 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 532.13 46 moveto
|
||||||
|
534.27 35.63 lineto
|
||||||
|
526.12 42.4 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 532.13 46 moveto
|
||||||
|
534.27 35.63 lineto
|
||||||
|
526.12 42.4 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
522.25 57.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_12
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
246.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
221.75 14.3 moveto 49 (13: 1101) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_4->node_12
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 267.91 87.21 moveto
|
||||||
|
264.16 75.41 259.07 59.38 254.77 45.82 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 258.03 44.52 moveto
|
||||||
|
251.66 36.05 lineto
|
||||||
|
251.35 46.64 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 258.03 44.52 moveto
|
||||||
|
251.66 36.05 lineto
|
||||||
|
251.35 46.64 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
262.25 57.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_8
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
344.25 18 37.89 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
323.25 14.3 moveto 42 (9: 1001) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_4->node_8
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 286.27 88.41 moveto
|
||||||
|
296.84 75.75 311.94 57.68 324.08 43.15 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 327.1 44.99 moveto
|
||||||
|
330.82 35.07 lineto
|
||||||
|
321.73 40.5 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 327.1 44.99 moveto
|
||||||
|
330.82 35.07 lineto
|
||||||
|
321.73 40.5 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
313.25 57.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_11
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
646.25 18 42.49 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
621.75 14.3 moveto 49 (12: 1100) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_3->node_11
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 646.25 86.8 moveto
|
||||||
|
646.25 75.16 646.25 59.55 646.25 46.24 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 649.75 46.18 moveto
|
||||||
|
646.25 36.18 lineto
|
||||||
|
642.75 46.18 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 649.75 46.18 moveto
|
||||||
|
646.25 36.18 lineto
|
||||||
|
642.75 46.18 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
646.25 57.8 moveto 4 (l) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_7
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 nodecolor
|
||||||
|
744.25 18 37.89 18 ellipse_path stroke
|
||||||
|
0 0 0 nodecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
723.25 14.3 moveto 42 (8: 1000) alignedtext
|
||||||
|
grestore
|
||||||
|
% node_3->node_7
|
||||||
|
gsave
|
||||||
|
1 setlinewidth
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 663.32 89.19 moveto
|
||||||
|
678.72 75.83 701.59 55.99 719.09 40.82 curveto
|
||||||
|
stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 721.45 43.4 moveto
|
||||||
|
726.71 34.21 lineto
|
||||||
|
716.86 38.12 lineto
|
||||||
|
closepath fill
|
||||||
|
1 setlinewidth
|
||||||
|
solid
|
||||||
|
0 0 0 edgecolor
|
||||||
|
newpath 721.45 43.4 moveto
|
||||||
|
726.71 34.21 lineto
|
||||||
|
716.86 38.12 lineto
|
||||||
|
closepath stroke
|
||||||
|
0 0 0 edgecolor
|
||||||
|
14 /Times-Roman set_font
|
||||||
|
701.25 57.8 moveto 5 (r) alignedtext
|
||||||
|
grestore
|
||||||
|
endpage
|
||||||
|
showpage
|
||||||
|
grestore
|
||||||
|
%%PageTrailer
|
||||||
|
%%EndPage: 1
|
||||||
|
%%Trailer
|
||||||
|
end
|
||||||
|
restore
|
||||||
|
%%EOF
|
65
exercises/philScript.sml
Normal file
65
exercises/philScript.sml
Normal file
@ -0,0 +1,65 @@
|
|||||||
|
open HolKernel Parse boolLib bossLib;
|
||||||
|
|
||||||
|
val _ = new_theory "phil";
|
||||||
|
|
||||||
|
val _ = Datatype `Philosopher = diogenes | platon | euklid`;
|
||||||
|
val Philosopher_nchotomy = DB.fetch "-" "Philosopher_nchotomy";
|
||||||
|
val Philosopher_distinct = DB.fetch "-" "Philosopher_distinct";
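
(* The predicates At, Sp, W and B on philosophers are introduced below
   via new_specification: the accompanying proof shows that predicates
   with the listed properties exist, so the constants can be introduced
   together with these facts.                                           *)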
|
||||||
|
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE = new_specification ("PHIL_KNOWLEDGE", ["At", "Sp", "W", "B"],
|
||||||
|
prove (``?At Sp W B.
|
||||||
|
(!p. (Sp p ==> B p)) /\
|
||||||
|
(!p. (At p ==> W p)) /\
|
||||||
|
(!p. ~(Sp p) \/ ~(At p)) /\
|
||||||
|
(!p. (Sp p) \/ (At p)) /\
|
||||||
|
((Sp platon) ==> ~(W diogenes)) /\
|
||||||
|
((Sp euklid) ==> ~(B diogenes)) /\
|
||||||
|
((At diogenes) ==> ~(B euklid)) /\
|
||||||
|
((At platon) ==> ~(W euklid))``,
|
||||||
|
|
||||||
|
Q.EXISTS_TAC `\p. Philosopher_CASE p F F T` THEN
|
||||||
|
Q.EXISTS_TAC `\p. Philosopher_CASE p T T F` THEN
|
||||||
|
Q.EXISTS_TAC `\p. Philosopher_CASE p F T T` THEN
|
||||||
|
Q.EXISTS_TAC `\p. Philosopher_CASE p T T F` THEN
|
||||||
|
SIMP_TAC (srw_ss()++DatatypeSimps.expand_type_quants_ss [``:Philosopher``]) []));
|
||||||
|
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_a = store_thm ("PHIL_KNOWLEDGE_a", ``!p. Sp p ==> B p``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_b = store_thm ("PHIL_KNOWLEDGE_b", ``!p. At p ==> W p``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_c = store_thm ("PHIL_KNOWLEDGE_c", ``!p. ~(Sp p) \/ ~(At p)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_c1 = store_thm ("PHIL_KNOWLEDGE_c1", ``!p. Sp p ==> ~(At p)``,
|
||||||
|
PROVE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_c2 = store_thm ("PHIL_KNOWLEDGE_c2", ``!p. At p ==> ~(Sp p)``,
|
||||||
|
PROVE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_d = store_thm ("PHIL_KNOWLEDGE_d", ``!p. (Sp p) \/ (At p)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_d1 = store_thm ("PHIL_KNOWLEDGE_d1", ``!p. ~(Sp p) ==> At p``,
|
||||||
|
PROVE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_d2 = store_thm ("PHIL_KNOWLEDGE_d2", ``!p. ~(At p) ==> Sp p``,
|
||||||
|
PROVE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_e = store_thm ("PHIL_KNOWLEDGE_e", ``(Sp platon) ==> ~(W diogenes)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_f = store_thm ("PHIL_KNOWLEDGE_f", ``(Sp euklid) ==> ~(B diogenes)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_g = store_thm ("PHIL_KNOWLEDGE_g", ``(At diogenes) ==> ~(B euklid)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val PHIL_KNOWLEDGE_h = store_thm ("PHIL_KNOWLEDGE_h", ``(At platon) ==> ~(W euklid)``,
|
||||||
|
REWRITE_TAC[PHIL_KNOWLEDGE]);
|
||||||
|
|
||||||
|
val _ = export_theory();
|
||||||
|
|
19
hol_commands.inc
Normal file
19
hol_commands.inc
Normal file
@ -0,0 +1,19 @@
|
|||||||
|
\newcommand{\ie}{i.\,e.\ }
|
||||||
|
\newcommand{\eg}{e.\,g.\ }
|
||||||
|
\newcommand{\wrt}{w.\,r.\,t.\ }
|
||||||
|
\newcommand{\aka}{a.\,k.\,a.\ }
|
||||||
|
\newcommand{\cf}{cf.\ }
|
||||||
|
\newcommand{\etc}{etc.\ }
|
||||||
|
\newcommand{\entails}{\vdash}
|
||||||
|
\newcommand{\hol}[1]{\texttt{#1}}
|
||||||
|
\newcommand{\ml}[1]{\texttt{#1}}
|
||||||
|
\newcommand{\textbsl}{\char`\\{}}
|
||||||
|
\newcommand{\holAnd}{/\textbsl{}}
|
||||||
|
\newcommand{\holOr}{\textbsl{}/}
|
||||||
|
\newcommand{\holLambda}{\textbsl{}}
|
||||||
|
\newcommand{\holImp}{==>}
|
||||||
|
\newcommand{\holEquiv}{<=>}
|
||||||
|
\newcommand{\holNeg}{\raisebox{0.5ex}{\texttildelow}}
|
||||||
|
|
||||||
|
|
||||||
|
\newcommand{\webversion}[2]{\ifdefined\ttwebflag #1 \else #2 \fi}
|
40
lectures/00_webpage_intro.tex
Normal file
40
lectures/00_webpage_intro.tex
Normal file
@ -0,0 +1,40 @@
|
|||||||
|
\part{Preface}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Preface}
|
||||||
|
\begin{itemize}
|
||||||
|
\item these slides originate from a course for advanced master students
|
||||||
|
\item it was given by the PROSPER group at KTH in Stockholm in 2017 (see \small{\url{https://www.kth.se/social/group/interactive-theorem-}})
|
||||||
|
\item the course focused on how to use HOL~4
|
||||||
|
\item students taking the course were expected to
|
||||||
|
\begin{itemize}
|
||||||
|
\item know functional programming, esp.\ SML
|
||||||
|
\item understand predicate logic
|
||||||
|
\item have some experience with pen and paper proofs
|
||||||
|
\end{itemize}
|
||||||
|
\item the course consisted of 9 lectures, which each took 90 minutes
|
||||||
|
\item there were 19 supervised practical sessions, which each took 2 h
|
||||||
|
\item usually there was 1 lecture and 2 practical sessions each week
|
||||||
|
\item students were expected to work about 10 h each week on exercises
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Preface II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item usually, these slides present concepts and some high-level entry points
|
||||||
|
\item often more details were explained in the lectures than are covered on the slides
|
||||||
|
\item technical details were covered in the practical sessions
|
||||||
|
\item they are provided as they are in the hope that they are useful\footnote{if you find errors, please contact Thomas Tuerk} (there are no guarantees of correctness :-))
|
||||||
|
\item the exercise question-sheets are available as well
|
||||||
|
\item if you have questions, feel free to contact Thomas Tuerk (\texttt{thomas@tuerk-brechen.de})
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "hol"
|
||||||
|
%%% End:
|
261
lectures/01_introduction.tex
Normal file
261
lectures/01_introduction.tex
Normal file
@ -0,0 +1,261 @@
|
|||||||
|
\part{Introduction}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{Motivation}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Motivation}
|
||||||
|
\begin{itemize}
|
||||||
|
\item Complex systems almost certainly contain bugs.
|
||||||
|
\item Critical systems (\eg avionics) need to meet very high standards.
|
||||||
|
\item It is infeasible in practice to achieve such high standards just by testing.
|
||||||
|
\item Debugging via testing suffers from diminishing returns.
|
||||||
|
\end{itemize}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{raggedleft}
|
||||||
|
\emph{``Program testing can be used to show the presence\\
|
||||||
|
of bugs, but never to show their absence!''\\
|
||||||
|
--- Edsger W. Dijkstra\\}
|
||||||
|
\end{raggedleft}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Famous Bugs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item Pentium FDIV bug (1994)\\(missing entry in lookup table, \$475 million damage)
|
||||||
|
\item Ariane V explosion (1996)\\(integer overflow, \$1 billion prototype destroyed)
|
||||||
|
\item Mars Climate Orbiter (1999)\\(destroyed during Mars orbit insertion, mix-up of the units pound-force and newton)
|
||||||
|
\item Knight Capital Group Error in Ultra Short Time Trading (2012)\\
|
||||||
|
(faulty deployment, repurposing of a critical flag, \$440 million lost in 45 min on the stock exchange)
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Fun to read}
|
||||||
|
\url{http://www.cs.tau.ac.il/~nachumd/verify/horror.html}
|
||||||
|
\url{https://en.wikipedia.org/wiki/List_of_software_bugs}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item proof can show absence of errors in design
|
||||||
|
\item but proofs talk about a \emph{design}, not a \emph{real system}
|
||||||
|
\item $\Rightarrow$ testing and proving complement each other
|
||||||
|
\end{itemize}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{raggedleft}
|
||||||
|
\emph{``As far as the laws of mathematics\\
|
||||||
|
refer to reality, they are not certain;\\
|
||||||
|
and as far as they are certain,\\
|
||||||
|
they do not refer to reality.''\\
|
||||||
|
--- Albert Einstein\\}
|
||||||
|
\end{raggedleft}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Types of Proofs}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Mathematical vs.\ Formal Proof}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{0.45\textwidth}
|
||||||
|
|
||||||
|
\begin{block}{Mathematical Proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item informal, convince other mathematicians
|
||||||
|
\item checked by community of domain experts
|
||||||
|
\item subtle errors are hard to find
|
||||||
|
\item often provide some new insight about our world
|
||||||
|
\item often short, but require creativity and a brilliant idea
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{column}
|
||||||
|
|
||||||
|
\begin{column}{0.45\textwidth}
|
||||||
|
\begin{block}{Formal Proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item formal, rigorously use a logical formalism
|
||||||
|
\item checkable by \textit{stupid} machines
|
||||||
|
\item very reliable
|
||||||
|
\item often contain no new ideas and no amazing insights
|
||||||
|
\item often long, very tedious, but largely trivial
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\textbf{We are interested in formal proofs in this lecture.}
|
||||||
|
\end{center}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
% \begin{frame}
|
||||||
|
% \frametitle{Detail Level of Formal Proof}
|
||||||
|
|
||||||
|
% \begin{center}
|
||||||
|
% In \emph{Principia Mathematica} it takes 300 pages to prove 1+1=2.
|
||||||
|
% \bigskip
|
||||||
|
|
||||||
|
% This is nicely illustrated in \emph{Logicomix - An Epic Search for Truth}.
|
||||||
|
% \includegraphics[width=10cm]{images/1+12_Logicomix.png}
|
||||||
|
% \end{center}
|
||||||
|
% \end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Automated vs Manual (Formal) Proof}
|
||||||
|
|
||||||
|
\begin{block}{Fully Manual Proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item very tedious; one has to grind through many trivial but detailed proofs
|
||||||
|
\item easy to make mistakes
|
||||||
|
\item hard to keep track of all assumptions and preconditions
|
||||||
|
\item hard to maintain, if something changes (see Ariane V)
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Automated Proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item amazing success in certain areas
|
||||||
|
\item but still often infeasible for interesting problems
|
||||||
|
\item hard to get insights in case a proof attempt fails
|
||||||
|
\item even if it works, it is often not that automated
|
||||||
|
\begin{itemize}
|
||||||
|
\item run automated tool for a few days
|
||||||
|
\item abort, change command line arguments to use different heuristics
|
||||||
|
\item run again and iterate till you find a set of heuristics that prove it fully automatically in a few seconds
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Interactive Proofs}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item combine strengths of manual and automated proofs
|
||||||
|
\item many different options to combine automated and manual proofs
|
||||||
|
\begin{itemize}
|
||||||
|
\item mainly check existing proofs (\eg HOL Zero)
|
||||||
|
\item user mainly provides lemmata statements, computer searches proofs using previous lemmata and very few hints (\eg ACL 2)
|
||||||
|
\item most systems are somewhere in the middle
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item typically the human user
|
||||||
|
\begin{itemize}
|
||||||
|
\item provides insights into the problem
|
||||||
|
\item structures the proof
|
||||||
|
\item provides main arguments
|
||||||
|
\end{itemize}
|
||||||
|
\item typically the computer
|
||||||
|
\begin{itemize}
|
||||||
|
\item checks proof
|
||||||
|
\item keeps track of all used assumptions
|
||||||
|
\item provides automation to grind through lengthy, but trivial proofs
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Typical Interactive Proof Activities}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item provide precise definitions of concepts
|
||||||
|
\item state properties of these concepts
|
||||||
|
\item prove these properties
|
||||||
|
\begin{itemize}
|
||||||
|
\item human provides insight and structure
|
||||||
|
\item computer does book-keeping and automates simple proofs
|
||||||
|
\end{itemize}
|
||||||
|
\item build and use libraries of formal definitions and proofs
|
||||||
|
\begin{itemize}
|
||||||
|
\item formalisations of mathematical theories like
|
||||||
|
\begin{itemize}
|
||||||
|
\item lists, sets, bags, \ldots
|
||||||
|
\item real numbers
|
||||||
|
\item probability theory
|
||||||
|
\end{itemize}
|
||||||
|
\item specifications of real-world artefacts like
|
||||||
|
\begin{itemize}
|
||||||
|
\item processors
|
||||||
|
\item programming languages
|
||||||
|
\item network protocols
|
||||||
|
\end{itemize}
|
||||||
|
\item reasoning tools
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\textbf{There is a strong connection with programming.\\Lessons learned in Software Engineering apply.}
|
||||||
|
\end{center}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Interactive Theorem Provers}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Different Interactive Provers}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item there are many different interactive provers, \eg
|
||||||
|
\begin{itemize}
|
||||||
|
\item Isabelle/HOL
|
||||||
|
\item Coq
|
||||||
|
\item PVS
|
||||||
|
\item HOL family of provers
|
||||||
|
\item ACL2
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item important differences
|
||||||
|
\begin{itemize}
|
||||||
|
\item the formalism used
|
||||||
|
\item level of trustworthiness
|
||||||
|
\item level of automation
|
||||||
|
\item libraries
|
||||||
|
\item languages for writing proofs
|
||||||
|
\item user interface
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Which theorem prover is the best one? :-)}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item there is no \alert{best} theorem prover
|
||||||
|
\item better question: Which is the \alert{best one for a certain purpose}?
|
||||||
|
\item important points to consider
|
||||||
|
\begin{itemize}
|
||||||
|
\item existing libraries
|
||||||
|
\item used logic
|
||||||
|
\item level of automation
|
||||||
|
\item user interface
|
||||||
|
\item importance of development speed versus trustworthiness
|
||||||
|
\item How familiar are you with the different provers?
|
||||||
|
\item Which prover do people in your vicinity use?
|
||||||
|
\item your personal preferences
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\bottomstatement{In this course we use the HOL theorem prover,\\ because it is used by the TCS group.}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "full"
|
||||||
|
%%% End:
|
159
lectures/02_organisational_matters.tex
Normal file
159
lectures/02_organisational_matters.tex
Normal file
@ -0,0 +1,159 @@
|
|||||||
|
\part{Organisational Matters}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Aims of this Course}
|
||||||
|
|
||||||
|
\begin{block}{Aims}
|
||||||
|
\begin{itemize}
|
||||||
|
\item introduction to interactive theorem proving (ITP)
|
||||||
|
\item being able to evaluate whether a problem can benefit from ITP
|
||||||
|
\item hands-on experience with HOL
|
||||||
|
\item learn how to build a formal model
|
||||||
|
\item learn how to express and prove important properties of such a model
|
||||||
|
\item learn about basic conformance testing
|
||||||
|
\item use a theorem prover on a small project
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Required Prerequisites}
|
||||||
|
\begin{itemize}
|
||||||
|
\item some experience with functional programming
|
||||||
|
\item knowing Standard ML syntax
|
||||||
|
\item basic knowledge about logic (\eg First Order Logic)
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Dates}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Interactive Theorem Proving Course takes place in Period 4 of the academic year 2016/2017
|
||||||
|
\item always in room 4523 or 4532
|
||||||
|
\item each week\\\medskip\qquad
|
||||||
|
\begin{tabular}{lll}
|
||||||
|
Mondays & 10:15 - 11:45 & lecture \\
|
||||||
|
Wednesdays & 10:00 - 12:00 & practical session \\
|
||||||
|
Fridays & 13:00 - 15:00 & practical session
|
||||||
|
\end{tabular}
|
||||||
|
\item no lecture on Monday, 1st of May, instead on Wednesday, 3rd May
|
||||||
|
\item last lecture: 12th of June
|
||||||
|
\item last practical session: 21st of June
|
||||||
|
\item 9 lectures, 17 practical sessions
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Exercises}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item after each lecture an exercise sheet is handed out
|
||||||
|
\item work on these exercises alone, except if stated otherwise explicitly
|
||||||
|
\item exercise sheet contains due date
|
||||||
|
\begin{itemize}
|
||||||
|
\item usually 10 days time to work on it
|
||||||
|
\item hand in during practical sessions
|
||||||
|
\item lecture Monday $\longrightarrow$ hand in at latest in next week's Friday session
|
||||||
|
\end{itemize}
|
||||||
|
\item main purpose: understanding ITP and learn how to use HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item no detailed grading, just pass/fail
|
||||||
|
\item retries possible till pass
|
||||||
|
\item if stuck, ask me or one another
|
||||||
|
\item practical sessions intend to provide this opportunity
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Practical Sessions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item very informal
|
||||||
|
\item main purpose: work on exercises
|
||||||
|
\begin{itemize}
|
||||||
|
\item I have a look and provide feedback
|
||||||
|
\item you can ask questions
|
||||||
|
\item I might sometimes explain things not covered in the lectures
|
||||||
|
\item I might provide some concrete tips and tricks
|
||||||
|
\item you can also discuss with each other
|
||||||
|
\end{itemize}
|
||||||
|
\item attendance not required, but highly recommended
|
||||||
|
\begin{itemize}
|
||||||
|
\item exception: session on 21st April
|
||||||
|
\end{itemize}
|
||||||
|
\item only requirement: turn up long enough to hand in exercises
|
||||||
|
\item \alert{you need to bring your own computer}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Handing-in Exercises}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item exercises are intended to be handed-in during practical sessions
|
||||||
|
\item attend at least one practical session each week
|
||||||
|
\item leave reasonable time to discuss exercises
|
||||||
|
\begin{itemize}
|
||||||
|
\item don't try to hand in your solution on Friday at 14:55
|
||||||
|
\end{itemize}
|
||||||
|
\item retries possible, but reasonable attempt before deadline required
|
||||||
|
\item handing-in outside practical sessions
|
||||||
|
\begin{itemize}
|
||||||
|
\item only if you have a good reason
|
||||||
|
\item decided on a case-by-case basis
|
||||||
|
\end{itemize}
|
||||||
|
\item electronic hand-ins
|
||||||
|
\begin{itemize}
|
||||||
|
\item only to get detailed feedback
|
||||||
|
\item does not replace personal hand-in
|
||||||
|
\item exceptions on a case-by-case basis if there is a good reason
|
||||||
|
\item I recommend using a KTH GitHub repo
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Passing the ITP Course}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item there is only a pass/fail mark
|
||||||
|
\item to pass you need to
|
||||||
|
\begin{itemize}
|
||||||
|
\item attend at least 7 of the 9 lectures
|
||||||
|
\item pass 8 of the 9 exercises
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Communication}
|
||||||
|
\begin{itemize}
|
||||||
|
\item we have the advantage of being a small group
|
||||||
|
\item therefore we are flexible
|
||||||
|
\item so please ask questions, even during lectures
|
||||||
|
\item there are many shy people, therefore
|
||||||
|
\begin{itemize}
|
||||||
|
\item anonymous checklist after each lecture
|
||||||
|
\item anonymous background questionnaire in first practical session
|
||||||
|
\end{itemize}
|
||||||
|
\item further information is posted on \emph{Interactive Theorem Proving Course} group on Group Web
|
||||||
|
\item contact me (Thomas Tuerk) directly, \eg via email \texttt{thomas@kth.se}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "full"
|
||||||
|
%%% End:
|
175
lectures/03_hol_overview.tex
Normal file
175
lectures/03_hol_overview.tex
Normal file
@ -0,0 +1,175 @@
|
|||||||
|
\part{HOL~4 History and Architecture}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{LCF}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{LCF - Logic of Computable Functions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{Stanford LCF} 1971-72 by Milner et al.
|
||||||
|
\item formalism devised by Dana Scott in 1969
|
||||||
|
\item intended to reason about recursively defined functions
|
||||||
|
\item intended for computer science applications
|
||||||
|
\item strengths
|
||||||
|
\begin{itemize}
|
||||||
|
\item powerful simplification mechanism
|
||||||
|
\item support for backward proof
|
||||||
|
\end{itemize}
|
||||||
|
\item limitations
|
||||||
|
\begin{itemize}
|
||||||
|
\item proofs need a lot of memory
|
||||||
|
\item fixed, hard-coded set of proof commands
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{LCF - Logic of Computable Functions II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Milner worked on improving LCF in Edinburgh
|
||||||
|
\item research assistants
|
||||||
|
\begin{itemize}
|
||||||
|
\item Lockwood Morris
|
||||||
|
\item Malcolm Newey
|
||||||
|
\item Chris Wadsworth
|
||||||
|
\item Mike Gordon
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{Edinburgh LCF} 1979
|
||||||
|
\item introduction of \emph{Meta Language} (ML)
|
||||||
|
\item ML was invented to write proof procedures
|
||||||
|
\item ML became an influential functional programming language
|
||||||
|
\item using ML allowed implementing the \emph{LCF approach}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{LCF Approach}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item implement an abstract datatype \alert{thm} to represent theorems
|
||||||
|
\item semantics of ML ensure that values of type thm can only be created using its interface
|
||||||
|
\item interface is very small
|
||||||
|
\begin{itemize}
|
||||||
|
\item predefined theorems are axioms
|
||||||
|
\item functions with result type thm are inferences
|
||||||
|
\end{itemize}
|
||||||
|
\item interface is carefully designed and checked
|
||||||
|
\begin{itemize}
|
||||||
|
\item size of interface and implementation allow careful checking
|
||||||
|
\item one checks that the interface really implements only axioms and inferences that are valid in the used logic
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{However you create a theorem, there is a proof for it.}
|
||||||
|
\item together with similar abstract datatypes for types and terms, this forms the \alert{kernel}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{LCF Approach II}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Modus Ponens Example}
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.4\textwidth}
|
||||||
|
\textbf{Inference Rule}\\\medskip
|
||||||
|
\inferrule{\Gamma \entails a \Rightarrow b \\ \Delta \entails a}{\Gamma \cup \Delta \entails b}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.5\textwidth}
|
||||||
|
\textbf{SML function}\\\medskip
|
||||||
|
\texttt{val MP\ :\ thm -> thm -> thm}
|
||||||
|
$\texttt{MP} (\Gamma \entails a \Rightarrow b) (\Delta \entails a) = (\Gamma \cup \Delta \entails b)$
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item very trustworthy --- only the small kernel needs to be trusted
|
||||||
|
\item efficient --- no need to store proofs
|
||||||
|
\begin{block}{Easy to extend and automate}
|
||||||
|
However complicated and potentially buggy your code is, if a value of type theorem is produced, it has been created through the small trusted interface. Therefore the statement really holds.
|
||||||
|
\end{block}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{LCF Style Systems}
|
||||||
|
|
||||||
|
There are now many interactive theorem provers out there that use
|
||||||
|
an approach similar to that of Edinburgh LCF.
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL family
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL theorem prover
|
||||||
|
\item HOL Light
|
||||||
|
\item HOL Zero
|
||||||
|
\item Proof Power
|
||||||
|
\item $\ldots$
|
||||||
|
\end{itemize}
|
||||||
|
\item Isabelle
|
||||||
|
\item Nuprl
|
||||||
|
\item Coq
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{History and Family of HOL}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{History of HOL}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item 1979 Edinburgh LCF by Milner, Gordon, et al.
|
||||||
|
\item 1981 Mike Gordon becomes lecturer in Cambridge
|
||||||
|
\item 1985 Cambridge LCF
|
||||||
|
\begin{itemize}
|
||||||
|
\item Larry Paulson and G\'{e}rard Huet
|
||||||
|
\item implementation of ML compiler
|
||||||
|
\item powerful simplifier
|
||||||
|
\item various improvements and extensions
|
||||||
|
\end{itemize}
|
||||||
|
\item 1988 HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item Mike Gordon and Keith Hanna
|
||||||
|
\item adaption of Cambridge LCF to classical higher order logic
|
||||||
|
\item intention: hardware verification
|
||||||
|
\end{itemize}
|
||||||
|
\item 1990 HOL90\\ reimplementation in SML by Konrad Slind at University of Calgary
|
||||||
|
\item 1998 HOL98\\ implementation in Moscow ML and new library and theory mechanism
|
||||||
|
\item since then HOL Kananaskis releases, called informally \alert{HOL~4}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Family of HOL}
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.65\textwidth}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{ProofPower}\\commercial version of HOL88 by Roger Jones, Rob Arthan et al.
|
||||||
|
\item \emph{HOL Light}\\lean CAML / OCaml port by John Harrison
|
||||||
|
\item \emph{HOL Zero}\\trustworthy proof checker by Mark Adams
|
||||||
|
\item \emph{Isabelle}
|
||||||
|
\begin{itemize}
|
||||||
|
\item 1990 by Larry Paulson
|
||||||
|
\item meta-theorem prover that supports multiple logics
|
||||||
|
\item however, mainly HOL is used, ZF only a little
|
||||||
|
\item nowadays probably the most widely used HOL system
|
||||||
|
\item originally designed for software verification
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{column}
|
||||||
|
\qquad
|
||||||
|
\begin{column}{.3\textwidth}
|
||||||
|
\includegraphics[width=3.2cm]{images/hol-family}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "hol"
|
||||||
|
%%% End:
|
267
lectures/04_hol_logic.tex
Normal file
267
lectures/04_hol_logic.tex
Normal file
@ -0,0 +1,267 @@
|
|||||||
|
\part{HOL's Logic}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL Logic}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Logic}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the HOL theorem prover uses a version of classical \textbf{h}igher \textbf{o}rder \textbf{l}ogic:\\
|
||||||
|
classical higher order predicate calculus with \\
|
||||||
|
terms from the typed lambda calculus (\ie simple type theory)
|
||||||
|
\item this sounds complicated, but is intuitive for SML programmers
|
||||||
|
\item (S)ML and HOL logic designed to fit each other
|
||||||
|
\item if you understand SML, you understand HOL logic
|
||||||
|
|
||||||
|
\bigskip
|
||||||
|
\begin{center}
|
||||||
|
\emph{HOL = functional programming + logic}
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{alertblock}{Ambiguity Warning}
|
||||||
|
The acronym \textit{HOL} refers to both the \textit{HOL interactive theorem prover} and the \textit{HOL logic} used by it. It's also a common abbreviation for \textit{higher order logic} in general.
|
||||||
|
\end{alertblock}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Types}
|
||||||
|
\begin{itemize}
|
||||||
|
\item SML datatype for types
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{Type Variables} ($\texttt{'a},\ \alpha,\ \texttt{'b},\ \beta,\ \ldots$)\\
|
||||||
|
Type variables are implicitly universally quantified. Theorems containing type variables
|
||||||
|
hold for all instantiations of these. Proofs using type variables can be seen as proof schemata.
|
||||||
|
\item \emph{Atomic Types} ($\texttt{c}$)\\
|
||||||
|
Atomic types denote fixed types. Examples: \texttt{num}, \texttt{bool}, \texttt{unit}
|
||||||
|
\item \emph{Compound Types} ($(\sigma_1, \ldots, \sigma_n) \textit{op}$)\\
|
||||||
|
\textit{op} is a \alert{type operator} of arity \textit{n} and $\sigma_1, \ldots, \sigma_n$ \alert{argument types}. Type operators denote operations for constructing types.\\
|
||||||
|
Examples: \texttt{num list} or \texttt{'a \# 'b}.
|
||||||
|
\item \emph{Function Types} ($\sigma_1 \to \sigma_2$)\\
|
||||||
|
$\sigma_1 \to \sigma_2$ is the type of \alert{total} functions from $\sigma_1$ to $\sigma_2$.
|
||||||
|
\end{itemize}
|
||||||
|
\item types are never empty in HOL, \ie\\
|
||||||
|
for each type at least one value exists
|
||||||
|
\item all HOL functions are total
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
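
\begin{frame}[fragile]
\frametitle{Types --- SML Examples}

An illustrative sketch of how HOL types appear as SML values
(assuming the standard entry points of the \texttt{Type} structure,
\eg \texttt{mk\_type}):

\begin{semiverbatim}
\small
(* a type quotation yields a value of the SML type hol_type *)
val ty1 = ``:num list``;

(* the same compound type built with a syntax function *)
val ty2 = mk_type ("list", [``:num``]);

(* a type variable and a function type *)
val ty3 = ``:'a -> bool``;
\end{semiverbatim}
\end{frame}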
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Terms}
|
||||||
|
\begin{itemize}
|
||||||
|
\item SML datatype for terms
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{Variables} ($\texttt{x}, \texttt{y}, \ldots$)
|
||||||
|
\item \emph{Constants} ($\texttt{c}, \ldots$)
|
||||||
|
\item \emph{Function Application} ($\texttt{f a}$)
|
||||||
|
\item \emph{Lambda Abstraction} ($\texttt{\textbackslash x.\ f x}$\ \ or\ \ $\lambda x.\ f x$)\\
|
||||||
|
Lambda abstraction represents anonymous function definition.\\The corresponding SML syntax is \texttt{fn x => f x}.
|
||||||
|
\end{itemize}
|
||||||
|
\item terms have to be well-typed
|
||||||
|
\item the same typing rules and the same type inference as in SML apply
|
||||||
|
\item terms very similar to SML expressions
|
||||||
|
\item notice: predicates are functions with return type \texttt{bool}, \ie
|
||||||
|
no distinction between functions and predicates, terms and formulae
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Terms II}
|
||||||
|
\begin{tabular}{lll}
|
||||||
|
\textbf{HOL term} & \textbf{SML expression} & \textbf{type HOL / SML} \\
|
||||||
|
\hol{0} & \ml{0} & \hol{num} / \ml{int} \\
|
||||||
|
\hol{x:'a} & \ml{x:'a} & variable of type \hol{'a} \\
|
||||||
|
\hol{x:bool} & \ml{x:bool} & variable of type \hol{bool} \\
|
||||||
|
\hol{x + 5} & \ml{x + 5} & applying function \hol{+} to \hol{x} and \hol{5} \\
|
||||||
|
\hol{\textbackslash x.\ x + 5} & \ml{fn x => x + 5} & anonymous (\aka inline) function \\
|
||||||
|
& & of type \hol{num -> num} \\
|
||||||
|
\hol{(5, T)} & \ml{(5, true)} & \hol{num \# bool} / \ml{int * bool}\\
|
||||||
|
\hol{[5;3;2]++[6]} & \ml{[5,3,2]@[6]} & \hol{num list} / \ml{int list}
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
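
\begin{frame}[fragile]
\frametitle{Terms --- SML Syntax Functions}

An illustrative sketch of building terms directly in SML
(assuming the standard functions \texttt{mk\_var}, \texttt{mk\_comb}
and \texttt{mk\_abs}):

\begin{semiverbatim}
\small
val x  = mk_var ("x", ``:num``);           (* variable x *)
val f  = mk_var ("f", ``:num -> bool``);   (* variable f *)

val fx = mk_comb (f, x);   (* function application  f x *)
val l  = mk_abs (x, fx);   (* lambda abstraction  \textbackslash{}x. f x *)

(* ill-typed terms are rejected, e.g. mk_comb (x, f) fails *)
\end{semiverbatim}
\end{frame}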
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Free and Bound Variables / Alpha Equivalence}
|
||||||
|
\begin{itemize}
|
||||||
|
\item in SML, the names of function arguments do not matter (much)
|
||||||
|
\item similarly in HOL, the names of variables used by lambda-abstractions do not matter (much)
|
||||||
|
\item the lambda-expression $\lambda x.\ t$ is said to \emph{bind} the variable $x$ in the term $t$
|
||||||
|
\item variables that are guarded by a lambda expression are called \emph{bound}
|
||||||
|
\item all other variables are \emph{free}
|
||||||
|
\item Example: $x$ is free and $y$ is bound in \hol{$(x = 5) \wedge (\lambda y.\ (y < x))\ 3$}
|
||||||
|
\item the names of bound variables are unimportant semantically
|
||||||
|
\item two terms are called \emph{alpha-equivalent} iff they differ only in the names of bound variables
|
||||||
|
\item Example: \hol{$\lambda{}x.\ x$} and \hol{$\lambda{}y.\ y$} are alpha-equivalent
|
||||||
|
\item Example: \hol{$x$} and \hol{$y$} are not alpha-equivalent
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
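
\begin{frame}[fragile]
\frametitle{Alpha Equivalence --- SML Examples}

A short sketch of how this looks at the SML level (assuming the standard
functions \texttt{aconv} and \texttt{free\_vars}):

\begin{semiverbatim}
\small
(* the terms differ only in the name of the bound variable *)
aconv ``\textbackslash{}x:'a. x`` ``\textbackslash{}y:'a. y``;    (* true *)

(* distinct free variables are not alpha-equivalent *)
aconv ``x:'a`` ``y:'a``;                      (* false *)

(* free_vars returns the free variables of a term *)
free_vars ``(x = 5) \holAnd{} (\textbackslash{}y. y < x) 3``;      (* [``x``] *)
\end{semiverbatim}
\end{frame}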
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Theorems}
|
||||||
|
\begin{itemize}
|
||||||
|
\item theorems are of the form $\Gamma \entails p$ where
|
||||||
|
\begin{itemize}
|
||||||
|
\item $\Gamma$ is a set of hypotheses
|
||||||
|
\item $p$ is the conclusion of the theorem
|
||||||
|
\item all elements of $\Gamma$ as well as $p$ are formulae, \ie terms of type \texttt{bool}
|
||||||
|
\end{itemize}
|
||||||
|
\item $\Gamma \entails p$ records that using $\Gamma$ the statement $p$ \alert{has been} proved\\
|
||||||
|
\item notice the difference to the usual reading in logic: there it means \alert{can be} proved
|
||||||
|
\item the proof itself is not recorded
|
||||||
|
\item theorems can only be created through a small interface in the \emph{kernel}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
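
\begin{frame}[fragile]
\frametitle{Theorems --- SML Examples}

A small sketch of how theorems look in SML (assuming the standard
functions \texttt{hyp} and \texttt{concl}):

\begin{semiverbatim}
\small
(* ASSUME is part of the kernel interface;  [p] |- p *)
val thm1 = ASSUME ``p:bool``;

hyp thm1;      (* the hypotheses:  [``p``] *)
concl thm1;    (* the conclusion:  ``p`` *)

(* there is no constructor that turns an
   arbitrary term into a theorem *)
\end{semiverbatim}
\end{frame}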
|
||||||
|
|
||||||
|
|
||||||
|
\section{Kernel}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light Kernel}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the HOL kernel is hard to explain
|
||||||
|
\begin{itemize}
|
||||||
|
\item for historical reasons, some concepts are represented in a rather complicated way
|
||||||
|
\item for speed reasons some derivable concepts have been added
|
||||||
|
\end{itemize}
|
||||||
|
\item instead consider the HOL Light kernel, which is a cleaned-up version
|
||||||
|
\item there are two predefined constants
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{= :\ 'a -> 'a -> bool}
|
||||||
|
\item \texttt{@ :\ ('a -> bool) -> 'a}
|
||||||
|
\end{itemize}
|
||||||
|
\item there are two predefined types
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{bool}
|
||||||
|
\item \texttt{ind}
|
||||||
|
\end{itemize}
|
||||||
|
\item the meaning of these types and constants is given by inference rules and axioms
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light Inferences I}
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule* [right=REFL] {\ }{\entails t = t}$\\[1em]
|
||||||
|
$\inferrule*[right=TRANS] {\Gamma \entails s = t\\\Delta \entails t = u}{\Gamma \cup \Delta \entails s = u}$\\[1em]
|
||||||
|
$\inferrule*[right=COMB]{\Gamma \entails s = t\\\Delta \entails u = v \\\\ \textit{types fit}}{\Gamma \cup \Delta \entails s(u) = t(v)}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=ABS]{\Gamma \entails s = t\\x\ \textit{not free in}\ \Gamma}{\Gamma \entails \lambda{}x.\ s = \lambda{}x.\ t}$\\[1em]
|
||||||
|
$\inferrule*[right=BETA]{\ }{\entails (\lambda{}x.\ t)\, x = t}$\\[1em]
|
||||||
|
$\inferrule*[right=ASSUME]{\ }{\{p\}\entails p}$
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light Inferences II}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=EQ\_MP]{\Gamma \entails p \Leftrightarrow q\\\Delta \entails p}{\Gamma \cup \Delta \entails q}$\\[1em]
|
||||||
|
$\inferrule*[right=DEDUCT\_ANTISYM\_RULE]{\Gamma \entails p\\\Delta \entails q}
|
||||||
|
{(\Gamma-\{q\}) \cup (\Delta - \{p\}) \entails p \Leftrightarrow q}$\\[1em]
|
||||||
|
$\inferrule*[right=INST]{\Gamma[x_1, \ldots, x_n] \entails p[x_1, \ldots, x_n]}
|
||||||
|
{\Gamma[t_1, \ldots, t_n] \entails p[t_1, \ldots, t_n]}$\\[1em]
|
||||||
|
$\inferrule*[right=INST\_TYPE]{\Gamma[\alpha_1, \ldots, \alpha_n] \entails p[\alpha_1, \ldots, \alpha_n]}
|
||||||
|
{\Gamma[\gamma_1, \ldots, \gamma_n] \entails p[\gamma_1, \ldots, \gamma_n]}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{frame}
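
\begin{frame}[fragile]
\frametitle{Primitive Inferences --- SML Example}

HOL~4's kernel exposes very similar rules as SML functions; a small
sketch using \texttt{REFL}, \texttt{ASSUME} and \texttt{TRANS}:

\begin{semiverbatim}
\small
val thm1 = REFL ``x:num``;            (* |- x = x *)

val thm2 = ASSUME ``(x:num) = y``;    (* [x = y] |- x = y *)
val thm3 = ASSUME ``(y:num) = z``;    (* [y = z] |- y = z *)

(* transitivity unions the hypotheses:
   [x = y, y = z] |- x = z *)
val thm4 = TRANS thm2 thm3;
\end{semiverbatim}
\end{frame}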
|
||||||
|
|
||||||
|
\newcommand{\tabitem}{~~\llap{\textbullet}~~}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light Axioms and Definition Principles}
|
||||||
|
\begin{itemize}
|
||||||
|
\item 3 axioms needed\medskip\\\qquad
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
ETA\_AX & $|- (\lambda{}x.\ t\ x) = t$ \\
|
||||||
|
SELECT\_AX & $|- P\ x \Longrightarrow P ((@) P)$ \\
|
||||||
|
INFINITY\_AX & predefined type \texttt{ind} is infinite
|
||||||
|
\end{tabular}
|
||||||
|
\item definition principle for constants
|
||||||
|
\begin{itemize}
|
||||||
|
\item constants can be introduced as abbreviations
|
||||||
|
\item constraint: no free vars and no new type vars
|
||||||
|
\end{itemize}
|
||||||
|
\item definition principle for types
|
||||||
|
\begin{itemize}
|
||||||
|
\item new types can be defined as non-empty subtypes of existing types
|
||||||
|
\end{itemize}
|
||||||
|
\item both principles
|
||||||
|
\begin{itemize}
|
||||||
|
\item lead to conservative extensions
|
||||||
|
\item preserve consistency
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
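
\begin{frame}[fragile]
\frametitle{Definition Principles --- SML Example}

A sketch of the constant definition principle as seen by a HOL~4 user
(assuming \texttt{new\_definition}; the constant \texttt{DOUBLE} is just
a made-up example):

\begin{semiverbatim}
\small
(* introduce a new constant DOUBLE as an abbreviation *)
val DOUBLE_def = new_definition ("DOUBLE_def",
  ``DOUBLE n = n + n``);

(* the resulting defining theorem for DOUBLE is a
   conservative extension of the existing theories *)
\end{semiverbatim}
\end{frame}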
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light derived concepts}
|
||||||
|
Everything else is derived from this small kernel.
|
||||||
|
\[
|
||||||
|
\begin{array}{ccl}
|
||||||
|
T & =_{\textit{def}} & (\lambda{}p.\ p) = (\lambda{}p.\ p)\\
|
||||||
|
\wedge & =_{\textit{def}} & \lambda{}p\,q.\ (\lambda f.\ f\ p\ q) = (\lambda{}f.\ f\ T\ T) \\
|
||||||
|
\Longrightarrow & =_{\textit{def}} & \lambda{}p\,q.\ (p \wedge q \Leftrightarrow p) \\
|
||||||
|
\forall & =_{\textit{def}} & \lambda{}P.\ (P = \lambda{}x.\ T) \\
|
||||||
|
\exists & =_{\textit{def}} & \lambda{}P.\ (\forall{}q.\ (\forall{}x.\ P(x) \Longrightarrow q) \Longrightarrow q) \\
|
||||||
|
\ldots \\
|
||||||
|
\end{array}
|
||||||
|
\]
|
||||||
|
\end{frame}
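
\begin{frame}[fragile]
\frametitle{Derived Concepts --- SML Sketch}

In HOL~4 the situation is analogous; a sketch of deriving $\entails T$
from the definition of \hol{T} (assuming \texttt{boolTheory.T\_DEF},
\texttt{SYM}, \texttt{REFL} and \texttt{EQ\_MP}; the derivation in HOL's
sources may differ in detail):

\begin{semiverbatim}
\small
(* T_DEF:  |- T <=> ((\textbackslash{}x. x) = (\textbackslash{}x. x)) *)
boolTheory.T_DEF;

(* turn the definition around and discharge it with REFL *)
val TRUTH' = EQ_MP (SYM boolTheory.T_DEF)
                   (REFL ``\textbackslash{}x:bool. x``);
(* |- T *)
\end{semiverbatim}
\end{frame}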
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Multiple Kernels}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the kernel defines abstract datatypes for types, terms and theorems
|
||||||
|
\item one does not need to look at the internal implementation
|
||||||
|
\item therefore, easy to exchange
|
||||||
|
\item there are at least 3 different kernels for HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item standard kernel (de Bruijn indices)
|
||||||
|
\item experimental kernel (name / type pairs)
|
||||||
|
\item OpenTheory kernel (for proof recording)
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL Logic Summary}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Logic Summary}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL theorem prover uses classical higher order logic
|
||||||
|
\item HOL logic is very similar to SML
|
||||||
|
\begin{itemize}
|
||||||
|
\item syntax
|
||||||
|
\item type system
|
||||||
|
\item type inference
|
||||||
|
\end{itemize}
|
||||||
|
\item HOL theorem prover very trustworthy because of LCF approach
|
||||||
|
\begin{itemize}
|
||||||
|
\item there is a small kernel
|
||||||
|
\item proofs are not stored explicitly
|
||||||
|
\end{itemize}
|
||||||
|
\item you don't need to know the details of the kernel
|
||||||
|
\item usually one works at a much higher level of abstraction
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
181
lectures/05_usage.tex
Normal file
181
lectures/05_usage.tex
Normal file
@ -0,0 +1,181 @@
|
|||||||
|
\part{Basic HOL Usage}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Technical Usage Issues}
|
||||||
|
\begin{itemize}
|
||||||
|
\item practical issues are discussed in practical sessions
|
||||||
|
\begin{itemize}
|
||||||
|
\item how to install HOL
|
||||||
|
\item which key-combinations to use in emacs-mode
|
||||||
|
\item detailed signature of libraries and theories
|
||||||
|
\item all parameters and options of certain tools
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item exercise sheets sometimes
|
||||||
|
\begin{itemize}
|
||||||
|
\item ask to read some documentation
|
||||||
|
\item provide examples
|
||||||
|
\item list references where to get additional information
|
||||||
|
\end{itemize}
|
||||||
|
\item if you have problems, ask me outside lecture (\href{mailto:thomas@tuerk-brechen.de}{thomas@tuerk-brechen.de})
|
||||||
|
\item covered only very briefly in lectures
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Installing HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item webpage: \url{https://hol-theorem-prover.org}
|
||||||
|
\item HOL supports two SML implementations
|
||||||
|
\begin{itemize}
|
||||||
|
\item Moscow ML (\url{http://mosml.org})
|
||||||
|
\item \alert{PolyML} (\url{http://www.polyml.org})
|
||||||
|
\end{itemize}
|
||||||
|
\item I recommend using PolyML
|
||||||
|
\item please use emacs with
|
||||||
|
\begin{itemize}
|
||||||
|
\item hol-mode
|
||||||
|
\item sml-mode
|
||||||
|
\item hol-unicode, if you want to type Unicode
|
||||||
|
\end{itemize}
|
||||||
|
\item please install a recent revision from the git repository or the Kananaskis-11 release
|
||||||
|
\item documentation found on HOL webpage and with sources
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{General Architecture}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL is a collection of SML modules
|
||||||
|
\item starting HOL starts an SML Read-Eval-Print Loop (REPL) with
|
||||||
|
\begin{itemize}
|
||||||
|
\item some HOL modules loaded
|
||||||
|
\item some default modules opened
|
||||||
|
\item an input wrapper called \texttt{unquote} that helps with parsing terms
|
||||||
|
\end{itemize}
|
||||||
|
\item \texttt{unquote} provides special quotes for terms and types
|
||||||
|
\begin{itemize}
|
||||||
|
\item implemented as input filter
|
||||||
|
\item \hol{``my-term``\ } becomes \ml{Parse.Term [QUOTE "my-term"]}
|
||||||
|
\item \hol{``:my-type``} becomes \ml{Parse.Type [QUOTE ":my-type"]}
|
||||||
|
\end{itemize}
|
||||||
|
\item main interfaces
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{emacs} (used in the course)
|
||||||
|
\item vim
|
||||||
|
\item bare shell
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
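
\begin{frame}[fragile]
\frametitle{General Architecture --- Quotation Example}

A small sketch of what the \texttt{unquote} filter does (both forms
below should produce the same values):

\begin{semiverbatim}
\small
(* what you type with the filter ... *)
val tm1 = ``x + 5``;
val ty1 = ``:num -> bool``;

(* ... and what the filter turns it into *)
val tm2 = Parse.Term [QUOTE "x + 5"];
val ty2 = Parse.Type [QUOTE ":num -> bool"];
\end{semiverbatim}
\end{frame}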
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Filenames}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{\texttt{*Script.sml}} --- HOL proof script file
|
||||||
|
\begin{itemize}
|
||||||
|
\item script files contain definitions and proof scripts
|
||||||
|
\item executing them results in HOL searching and checking proofs
|
||||||
|
\item this might take a very long time
|
||||||
|
\item resulting theorems are stored in \texttt{*Theory.\{sml|sig\}} files
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{\texttt{*Theory.\{sml|sig\}}} --- HOL theory\\
|
||||||
|
\begin{itemize}
|
||||||
|
\item auto-generated by corresponding script file
|
||||||
|
\item load quickly, because they don't search/check proofs
|
||||||
|
\item do not edit theory files
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{\texttt{*Syntax.\{sml|sig\}}} --- syntax libraries \\
|
||||||
|
\begin{itemize}
|
||||||
|
\item contain syntax related functions
|
||||||
|
\item \ie functions to construct and destruct terms and types
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{\texttt{*Lib.\{sml|sig\}}} --- general libraries
|
||||||
|
\item \emph{\texttt{*Simps.\{sml|sig\}}} --- simplifications
|
||||||
|
\item \emph{\texttt{selftest.sml}} --- selftest for current directory
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
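
\begin{frame}[fragile]
\frametitle{Filenames --- Script File Sketch}

A minimal sketch of what a \texttt{*Script.sml} file typically looks like
(the theory name \texttt{example} and the stored theorem are made up):

\begin{semiverbatim}
\small
(* exampleScript.sml *)
open HolKernel Parse boolLib bossLib;

val _ = new_theory "example";

val example_thm = store_thm ("example_thm",
  ``!A B. A \holAnd{} B ==> A``,
  REPEAT STRIP_TAC >> ASM_REWRITE_TAC []);

val _ = export_theory ();
(* running this file produces exampleTheory.sml/.sig *)
\end{semiverbatim}
\end{frame}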
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Directory Structure}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{\texttt{bin}} --- HOL binaries
|
||||||
|
\item \emph{\texttt{src}} --- HOL sources
|
||||||
|
\item \emph{\texttt{examples}} --- HOL examples
|
||||||
|
\begin{itemize}
|
||||||
|
\item interesting projects by various people
|
||||||
|
\item examples owned by their developer
|
||||||
|
\item coding style and level of maintenance differ a lot
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{\texttt{help}} --- sources for reference manual
|
||||||
|
\begin{itemize}
|
||||||
|
\item after compilation, home of the HTML reference pages
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{\texttt{Manual}} --- HOL manuals
|
||||||
|
\begin{itemize}
|
||||||
|
\item Tutorial
|
||||||
|
\item Description
|
||||||
|
\item Reference (PDF version)
|
||||||
|
\item Interaction
|
||||||
|
\item Quick (cheat pages)
|
||||||
|
\item Style-guide
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Unicode}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL supports both Unicode and pure ASCII input and output
|
||||||
|
\item advantages of Unicode compared to ASCII
|
||||||
|
\begin{itemize}
|
||||||
|
\item easier to read (good fonts provided)
|
||||||
|
\item no need to learn special ASCII syntax
|
||||||
|
\end{itemize}
|
||||||
|
\item disadvantages of Unicode compared to ASCII
|
||||||
|
\begin{itemize}
|
||||||
|
\item harder to type (even with \texttt{hol-unicode.el})
|
||||||
|
\item less portable between systems
|
||||||
|
\end{itemize}
|
||||||
|
\item whether you like Unicode is largely a matter of personal taste
|
||||||
|
\item HOL's policy
|
||||||
|
\begin{itemize}
|
||||||
|
\item no Unicode in HOL's source directory \texttt{src}
|
||||||
|
\item Unicode in examples directory \texttt{examples} is fine
|
||||||
|
\end{itemize}
|
||||||
|
\item I recommend turning Unicode output off initially
|
||||||
|
\begin{itemize}
|
||||||
|
\item this simplifies learning the ASCII syntax
|
||||||
|
\item no need for special fonts
|
||||||
|
\item it is easier to copy and paste terms from HOL's output
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Where to find help?}
|
||||||
|
\begin{itemize}
|
||||||
|
\item reference manual
|
||||||
|
\begin{itemize}
|
||||||
|
\item available as HTML pages, single PDF file and in-system help
|
||||||
|
\end{itemize}
|
||||||
|
\item description manual
|
||||||
|
\item Style-guide (still under development)
|
||||||
|
\item HOL webpage (\url{https://hol-theorem-prover.org})
|
||||||
|
\item mailing-list \texttt{hol-info}
|
||||||
|
\item \ml{DB.match} and \ml{DB.find} (example on the next slide)
|
||||||
|
\item \ml{*Theory.sig} and \ml{selftest.sml} files
|
||||||
|
\item ask someone, \eg me :-) (\href{mailto:thomas@tuerk-brechen.de}{thomas@tuerk-brechen.de})
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
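
\begin{frame}[fragile]
\frametitle{Searching for Theorems --- Example}

A small sketch of using \ml{DB.find} and \ml{DB.match} in the REPL
(the search string and the pattern are just examples):

\begin{semiverbatim}
\small
(* find theorems whose name contains the string APPEND *)
DB.find "APPEND";

(* find theorems in the loaded theories
   matching a term pattern *)
DB.match [] ``x + 0``;
\end{semiverbatim}
\end{frame}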
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
419
lectures/06_forward_proofs.tex
Normal file
419
lectures/06_forward_proofs.tex
Normal file
@ -0,0 +1,419 @@
|
|||||||
|
\part{Forward Proofs}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Kernel too detailed}
|
||||||
|
\begin{itemize}
|
||||||
|
\item we already discussed the HOL Logic
|
||||||
|
\item the kernel itself does not even contain basic logic operators
|
||||||
|
\item usually one uses a much higher level of abstraction
|
||||||
|
\begin{itemize}
|
||||||
|
\item many operations and datatypes are defined
|
||||||
|
\item high-level derived inference rules are used
|
||||||
|
\end{itemize}
|
||||||
|
\item let's now look at this more common abstraction level
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Term Syntax}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Common Terms and Types}
|
||||||
|
|
||||||
|
\begin{tabular}{lcc}
|
||||||
|
& \emph{Unicode} & \emph{ASCII} \\
|
||||||
|
type vars & \hol{$\alpha$}, \hol{$\beta$}, \ldots & \hol{'a}, \hol{'b}, \ldots \\
|
||||||
|
type annotated term & \hol{term:type} & \hol{term:type} \\
|
||||||
|
true & \hol{T} & \hol{T} \\
|
||||||
|
false & \hol{F} & \hol{F} \\
|
||||||
|
negation & \hol{$\neg$b} & \hol{\holNeg{}b} \\
|
||||||
|
conjunction & \hol{b1\ $\wedge$\ b2} & \hol{b1 \holAnd{} b2} \\
|
||||||
|
disjunction & \hol{b1\ $\vee$\ b2} & \hol{b1 \holOr{} b2} \\
|
||||||
|
implication & \hol{b1\ $\Longrightarrow$\ b2} & \hol{b1 \holImp{} b2} \\
|
||||||
|
equivalence & \hol{b1\ $\Longleftrightarrow$\ b2} & \hol{b1 \holEquiv{} b2} \\
|
||||||
|
disequation & \hol{v1\ $\neq$\ v2} & \hol{v1 <> v2} \\
|
||||||
|
all-quantification & \hol{$\forall$x.\ P x} & \hol{!x.\ P x} \\
|
||||||
|
existential quantification & \hol{$\exists$x.\ P x} & \hol{?x.\ P x} \\
|
||||||
|
Hilbert's choice operator & \hol{@x.\ P x} & \hol{@x.\ P x}
|
||||||
|
\end{tabular}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
Constant and variable names are subject to similar restrictions as in SML.\\
|
||||||
|
HOL specific: don't start variable names with an underscore
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Syntax conventions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item common function syntax
|
||||||
|
\begin{itemize}
|
||||||
|
\item prefix notation, \eg \hol{SUC x}
|
||||||
|
\item infix notation, \eg \hol{x + y}
|
||||||
|
\item quantifier notation, \eg \hol{$\forall$x.\ P x} means \hol{($\forall$)\ ($\lambda$x.\ P x)}
|
||||||
|
\end{itemize}
|
||||||
|
\item infix and quantifier notation can be turned into prefix notation \\
|
||||||
|
Example: \hol{(+)\ x\ y} and \hol{\$+\ x\ y} are the same as \hol{x + y}
|
||||||
|
\item quantifiers of the same type don't need to be repeated \\
|
||||||
|
Example:\
|
||||||
|
\hol{$\forall$x\ y.\ P\ x\ y} is short for
|
||||||
|
\hol{$\forall$x.\ $\forall$y.\ P\ x\ y}
|
||||||
|
\item there is special syntax for some functions\\
|
||||||
|
Example:\
|
||||||
|
\hol{if c then v1 else v2} is nice syntax for
|
||||||
|
\hol{COND c v1 v2}
|
||||||
|
\item associative infix operators are usually right-associative\\
|
||||||
|
Example:\
|
||||||
|
\hol{b1 \holAnd{} b2 \holAnd{} b3} is parsed as
|
||||||
|
\hol{b1 \holAnd{} (b2 \holAnd{} b3)}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Creating Terms}
|
||||||
|
|
||||||
|
\begin{block}{Term Parser}
|
||||||
|
Use special quotation provided by \texttt{unquote}.
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{alertblock}{Operator Precedence}
|
||||||
|
It is easy to misjudge the binding strength of certain operators. Therefore, use plenty of parentheses.
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{block}{Use Syntax Functions}
|
||||||
|
Terms are just SML values of type \texttt{term}. You can use syntax functions (usually defined in \texttt{*Syntax.sml} files) to create them.
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Creating Terms II}
|
||||||
|
\begin{tabular}{lll}
|
||||||
|
\emph{Parser} & \emph{Syntax Funs} & \\
|
||||||
|
\hol{``:bool``} & \ml{mk\_type ("bool", [])} or \ml{bool} & type of Booleans \\
|
||||||
|
\hol{``T``} & \ml{mk\_const ("T", bool)} or \ml{T} & term true \\
|
||||||
|
\hol{``\holNeg{}b``} & \hol{mk\_neg (} & negation of \\
|
||||||
|
& \hol{\ \ mk\_var ("b", bool))} & \ \ Boolean var b\\
|
||||||
|
\hol{``\ldots\ \holAnd{} \ldots``} & \hol{mk\_conj (\ldots, \ldots)} & conjunction \\
|
||||||
|
\hol{``\ldots\ \holOr{} \ldots``} & \hol{mk\_disj (\ldots, \ldots)} & disjunction \\
|
||||||
|
\hol{``\ldots\ \holImp{} \ldots``} & \hol{mk\_imp (\ldots, \ldots)} & implication \\
|
||||||
|
\hol{``\ldots\ = \ldots``} & \hol{mk\_eq (\ldots, \ldots)} & equation \\
|
||||||
|
\hol{``\ldots\ <=> \ldots``} & \hol{mk\_eq (\ldots, \ldots)} & equivalence \\
|
||||||
|
\hol{``\ldots\ <> \ldots``} & \hol{mk\_neg (mk\_eq (\ldots, \ldots))} & negated equation
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
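
\begin{frame}[fragile]
\frametitle{Creating Terms --- Example}

A small sketch contrasting the two ways of creating the same term
(assuming the syntax functions \texttt{mk\_var}, \texttt{mk\_conj} and
\texttt{mk\_imp}):

\begin{semiverbatim}
\small
(* via the parser *)
val tm1 = ``A \holAnd{} (B ==> A)``;

(* via syntax functions *)
val vA  = mk_var ("A", bool);
val vB  = mk_var ("B", bool);
val tm2 = mk_conj (vA, mk_imp (vB, vA));

aconv tm1 tm2;    (* true *)
\end{semiverbatim}
\end{frame}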
|
||||||
|
|
||||||
|
|
||||||
|
\section{Inference Rules}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Inference Rules for Equality}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=REFL] {\ }{\entails t = t}$\\[1em]
|
||||||
|
$\inferrule*[right=ABS]{\Gamma \entails s = t\\x\ \textit{not free in}\ \Gamma}{\Gamma \entails \lambda{}x.\ s = \lambda{}x. t}$\\[1em]
|
||||||
|
$\inferrule*[right=MK\_COMB]{\Gamma \entails s = t\\\Delta \entails u = v \\\\ \textit{types fit}}{\Gamma \cup \Delta \entails s(u) = t(v)}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule* [right={GSYM}] {\Gamma \entails s = t}{\Gamma \entails t = s}$\\[1em]
|
||||||
|
$\inferrule*[right=TRANS] {\Gamma \entails s = t\\\Delta \entails t = u}{\Gamma \cup \Delta \entails s = u}$\\[1em]
|
||||||
|
$\inferrule*[right=EQ\_MP]{\Gamma \entails p \Leftrightarrow q\\\Delta \entails p}{\Gamma \cup \Delta \entails q}$\\[1em]
|
||||||
|
$\inferrule*[right=BETA\_CONV]{\ }{\entails (\lambda{}x.\ t) v = t[v/x]}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Inference Rules for free Variables}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=INST]{\Gamma[x_1, \ldots, x_n] \entails p[x_1, \ldots, x_n]}
|
||||||
|
{\Gamma[t_1, \ldots, t_n] \entails p[t_1, \ldots, t_n]}$\\[1em]
|
||||||
|
$\inferrule*[right=INST\_TYPE]{\Gamma[\alpha_1, \ldots, \alpha_n] \entails p[\alpha_1, \ldots, \alpha_n]}
|
||||||
|
{\Gamma[\gamma_1, \ldots, \gamma_n] \entails p[\gamma_1, \ldots, \gamma_n]}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Inference Rules for Implication}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={MP, MATCH\_MP}]{\Gamma \entails p \Longrightarrow q\\\Delta \entails p}{\Gamma \cup \Delta \entails q}$\\[1em]
|
||||||
|
$\inferrule*[right=EQ\_IMP\_RULE] {\Gamma \entails p = q}{\Gamma \entails p \Longrightarrow q\\\\\Gamma \entails q \Longrightarrow p}$\\[1em]
|
||||||
|
$\inferrule*[right=IMP\_ANTISYM\_RULE]{\Gamma \entails p \Longrightarrow q\\\\\Delta \entails q \Longrightarrow p}{\Gamma \cup \Delta \entails p = q}$\\[1em]
|
||||||
|
$\inferrule*[right=IMP\_TRANS] {\Gamma \entails p \Longrightarrow q\\\Delta \entails q \Longrightarrow r}{\Gamma \cup \Delta \entails p \Longrightarrow r}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=DISCH]{\Gamma \entails p}{\Gamma - \{q\} \entails q \Longrightarrow p}$\\[1em]
|
||||||
|
$\inferrule*[right=UNDISCH]{\Gamma \entails q \Longrightarrow p}{\Gamma \cup \{q\} \entails p}$\\[1em]
|
||||||
|
$\inferrule*[right=NOT\_INTRO]{\Gamma \entails p \Longrightarrow \text{F}}{\Gamma \entails \holNeg{}p}$\\[1em]
|
||||||
|
$\inferrule*[right=NOT\_ELIM]{\Gamma \entails \holNeg{}p}{\Gamma \entails p \Longrightarrow \text{F}}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Inference Rules for Conjunction / Disjunction}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={CONJ}]{\Gamma \entails p\\\Delta \entails q}{\Gamma \cup \Delta \entails p\ \wedge\ q}$\\[1em]
|
||||||
|
$\inferrule*[right={CONJUNCT1}]{\Gamma \entails p\ \wedge\ q}{\Gamma \entails p}$\\[1em]
|
||||||
|
$\inferrule*[right={CONJUNCT2}]{\Gamma \entails p\ \wedge\ q}{\Gamma \entails q}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={DISJ1}]{\Gamma \entails p}{\Gamma \entails p\ \vee\ q}$\\[1em]
|
||||||
|
$\inferrule*[right={DISJ2}]{\Gamma \entails q}{\Gamma \entails p\ \vee\ q}$\\[1em]
|
||||||
|
$\inferrule*[right={DISJ\_CASES}]{\Gamma \entails p \vee q\\\Delta_1 \cup \{p\} \entails r\\\Delta_2 \cup \{q\} \entails r}{\Gamma \cup \Delta_1 \cup \Delta_2 \entails r}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Inference Rules for Quantifiers}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={GEN}]{\Gamma \entails p\\x \text{\ not free in\ }\Gamma}{\Gamma \entails \forall{}x.\ p}$\\[1em]
|
||||||
|
$\inferrule*[right={SPEC}]{\Gamma \entails \forall{}x.\ p}{\Gamma \entails p[u/x]}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={EXISTS}]{\Gamma \entails p[u/x]}{\Gamma \entails \exists{}x.\ p}$\\[1em]
|
||||||
|
$\inferrule*[right={CHOOSE}]{\Gamma \entails \exists{}x.\ p\\\Delta \cup \{p[u/x]\} \entails r\\
|
||||||
|
u \text{\ not free in\ } \Gamma, \Delta, p \text{ and } r}
|
||||||
|
{\Gamma \cup \Delta \entails r}$\\[1em]
|
||||||
|
\end{center}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\section{Forward Proofs}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Forward Proofs}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item axioms and inference rules are used to derive theorems
|
||||||
|
\item this method is called \emph{forward proof}
|
||||||
|
\begin{itemize}
|
||||||
|
\item one starts with basic building blocks
|
||||||
|
\item one moves step by step forward
|
||||||
|
\item finally the theorem one is interested in is derived
|
||||||
|
\end{itemize}
|
||||||
|
\item one can also implement one's own proof tools
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Forward Proofs --- Example I}
|
||||||
|
|
||||||
|
Let's prove $\forall{}p.\ p \Longrightarrow p$.
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
val IMP_REFL_THM = let
|
||||||
|
val tm1 = ``p:bool``;
|
||||||
|
val thm1 = ASSUME tm1;
|
||||||
|
val thm2 = DISCH tm1 thm1;
|
||||||
|
in
|
||||||
|
GEN tm1 thm2
|
||||||
|
|
||||||
|
end
|
||||||
|
|
||||||
|
fun IMP_REFL t =
|
||||||
|
SPEC t IMP_REFL_THM;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
|
||||||
|
> val tm1 = ``p``: term
|
||||||
|
> val thm1 = [p] |- p: thm
|
||||||
|
> val thm2 = |- p ==> p: thm
|
||||||
|
|
||||||
|
> val IMP_REFL_THM =
|
||||||
|
|- !p. p ==> p: thm
|
||||||
|
|
||||||
|
|
||||||
|
> val IMP_REFL =
|
||||||
|
fn: term -> thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Forward Proofs --- Example II}
|
||||||
|
|
||||||
|
Let's prove $\forall{}P\,v.\ (\exists{}x.\ (x = v) \wedge P\ x) \Longleftrightarrow P\ v$.
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\scriptsize
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
val tm_v = ``v:'a``;
|
||||||
|
val tm_P = ``P:'a -> bool``;
|
||||||
|
val tm_lhs = ``?x. (x = v) \holAnd{} P x``
|
||||||
|
val tm_rhs = mk_comb (tm_P, tm_v);
|
||||||
|
|
||||||
|
val thm1 = let
|
||||||
|
val thm1a = ASSUME tm_rhs;
|
||||||
|
val thm1b =
|
||||||
|
CONJ (REFL tm_v) thm1a;
|
||||||
|
val thm1c =
|
||||||
|
EXISTS (tm_lhs, tm_v) thm1b
|
||||||
|
in
|
||||||
|
DISCH tm_rhs thm1c
|
||||||
|
end
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
> val thm1a = [P v] |- P v: thm
|
||||||
|
> val thm1b =
|
||||||
|
[P v] |- (v = v) \holAnd{} P v: thm
|
||||||
|
> val thm1c =
|
||||||
|
[P v] |- ?x. (x = v) \holAnd{} P x
|
||||||
|
|
||||||
|
> val thm1 = [] |-
|
||||||
|
P v ==> ?x. (x = v) \holAnd{} P x: thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Forward Proofs --- Example II cont.}
|
||||||
|
|
||||||
|
\begin{columns}
|
||||||
|
\scriptsize
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
val thm2 = let
|
||||||
|
val thm2a =
|
||||||
|
ASSUME ``(u:'a = v) \holAnd{} P u``
|
||||||
|
val thm2b = AP_TERM tm_P
|
||||||
|
(CONJUNCT1 thm2a);
|
||||||
|
val thm2c = EQ_MP thm2b
|
||||||
|
(CONJUNCT2 thm2a);
|
||||||
|
val thm2d =
|
||||||
|
CHOOSE (``u:'a``,
|
||||||
|
ASSUME tm_lhs) thm2c
|
||||||
|
in
|
||||||
|
DISCH tm_lhs thm2d
|
||||||
|
end
|
||||||
|
|
||||||
|
|
||||||
|
val thm3 = IMP_ANTISYM_RULE thm2 thm1
|
||||||
|
|
||||||
|
val thm4 = GENL [tm_P, tm_v] thm3
|
||||||
|
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\begin{column}{.45\textwidth}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
|
||||||
|
> val thm2a = [(u = v) \holAnd{} P u] |-
|
||||||
|
(u = v) \holAnd{} P u: thm
|
||||||
|
> val thm2b = [(u = v) \holAnd{} P u] |-
|
||||||
|
P u <=> P v
|
||||||
|
> val thm2c = [(u = v) \holAnd{} P u] |-
|
||||||
|
P v
|
||||||
|
> val thm2d = [?x. (x = v) \holAnd{} P x] |-
|
||||||
|
P v
|
||||||
|
|
||||||
|
|
||||||
|
> val thm2 = [] |-
|
||||||
|
?x. (x = v) \holAnd{} P x ==> P v
|
||||||
|
|
||||||
|
|
||||||
|
> val thm3 = [] |-
|
||||||
|
?x. (x = v) \holAnd{} P x <=> P v
|
||||||
|
> val thm4 = [] |- !P v.
|
||||||
|
?x. (x = v) \holAnd{} P x <=> P v
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{column}
|
||||||
|
\end{columns}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
% \section{Rules and Conversions}
|
||||||
|
% \begin{frame}
|
||||||
|
% \frametitle{Derived Tools}
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item HOL lives from implementing reasoning tools in SML
|
||||||
|
% \item \emph{rules} --- use theorems to produce new theorems\\
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item SML-type \ml{thm -> thm}
|
||||||
|
% \item functions with similar type often called rule as well
|
||||||
|
% \end{itemize}
|
||||||
|
% \item \emph{conversions} --- convert a term into an equal one\\
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item SML-type \ml{term -> thm}
|
||||||
|
% \item given term \ml{t} produces theorem of form \ml{[] |- t = t'}
|
||||||
|
% \item may raise exceptions \ml{HOL\_ERR} or \ml{UNCHANGED}
|
||||||
|
% \end{itemize}
|
||||||
|
% \item \ldots
|
||||||
|
% \end{itemize}
|
||||||
|
% \end{frame}
|
||||||
|
|
||||||
|
% \begin{frame}
|
||||||
|
% \frametitle{Conversions}
|
||||||
|
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item HOL has very good tool support for equality reasoning
|
||||||
|
% \item \emph{conversions} are important for HOL's automation
|
||||||
|
% \item there is a lot of infrastructure for conversions
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item \ml{RAND\_CONV}, \ml{RATOR\_CONV}, \ml{ABS\_CONV}
|
||||||
|
% \item \ml{DEPTH\_CONV}
|
||||||
|
% \item \ml{THENC}, \ml{TRY\_CONV}, \ml{FIRST\_CONV}
|
||||||
|
% \item \ml{REPEAT\_CONV}
|
||||||
|
% \item \ml{CHANGED\_CONV}, \ml{QCHANGED\_CONV}
|
||||||
|
% \item \ml{NO\_CONV}, \ml{ALL\_CONV}
|
||||||
|
% \item \ldots
|
||||||
|
% \end{itemize}
|
||||||
|
% \item important conversions
|
||||||
|
% \begin{itemize}
|
||||||
|
% \item \ml{REWR\_CONV}
|
||||||
|
% \item \ml{REWRITE\_CONV}
|
||||||
|
% \item \ldots
|
||||||
|
% \end{itemize}
|
||||||
|
% \end{itemize}
|
||||||
|
% \end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
564
lectures/07_backward_proofs.tex
Normal file
564
lectures/07_backward_proofs.tex
Normal file
@ -0,0 +1,564 @@
|
|||||||
|
\part{Backward Proofs}
|
||||||
|
|
||||||
|
\section{Motivation}
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Motivation I}
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's prove \hol{!A B. A \holAnd{} B <=> B \holAnd{} A}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize
|
||||||
|
\mlcomment{Show |- A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
val thm1a = ASSUME ``A \holAnd{} B``;
|
||||||
|
val thm1b = CONJ (CONJUNCT2 thm1a) (CONJUNCT1 thm1a);
|
||||||
|
val thm1 = DISCH ``A \holAnd{} B`` thm1b
|
||||||
|
|
||||||
|
\mlcomment{Show |- B \holAnd{} A ==> A \holAnd{} B}
|
||||||
|
val thm2a = ASSUME ``B \holAnd{} A``;
|
||||||
|
val thm2b = CONJ (CONJUNCT2 thm2a) (CONJUNCT1 thm2a);
|
||||||
|
val thm2 = DISCH ``B \holAnd{} A`` thm2b
|
||||||
|
|
||||||
|
\mlcomment{Combine to get |- A \holAnd{} B <=> B \holAnd{} A}
|
||||||
|
val thm3 = IMP_ANTISYM_RULE thm1 thm2
|
||||||
|
|
||||||
|
\mlcomment{Add quantifiers}
|
||||||
|
val thm4 = GENL [``A:bool``, ``B:bool``] thm3
|
||||||
|
\end{semiverbatim}
|
||||||
|
\bigskip
|
||||||
|
\item this is how you write down a proof
|
||||||
|
\item for finding a proof it is however often useful to think \emph{backwards}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Motivation II - thinking backwards}
|
||||||
|
\begin{itemize}
|
||||||
|
\item we want to prove \begin{itemize}
|
||||||
|
\item \hol{!A B. A \holAnd{} B <=> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\item all-quantifiers can easily be added later, so let's get rid of them \\
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{A \holAnd{} B <=> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\item now we have an equivalence, let's show 2 implications \\
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
\item \hol{B \holAnd{} A ==> A \holAnd{} B}
|
||||||
|
\end{itemize}
|
||||||
|
\item we have an implication, so we can use the precondition as an assumption \\
|
||||||
|
\begin{itemize}
|
||||||
|
\item using \hol{A \holAnd{} B} show \hol{B \holAnd{} A}
|
||||||
|
\item \hol{A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Motivation III - thinking backwards}
|
||||||
|
\begin{itemize}
|
||||||
|
\item we have a conjunction as assumption, let's split it
|
||||||
|
\begin{itemize}
|
||||||
|
\item using \hol{A} and \hol{B} show \hol{B \holAnd{} A}
|
||||||
|
\item \hol{A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\item we have to show a conjunction, so let's show both parts
|
||||||
|
\begin{itemize}
|
||||||
|
\item using \hol{A} and \hol{B} show \hol{B}
|
||||||
|
\item using \hol{A} and \hol{B} show \hol{A}
|
||||||
|
\item \hol{A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\item the first two proof obligations are trivial
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{A \holAnd{} B ==> B \holAnd{} A}
|
||||||
|
\end{itemize}
|
||||||
|
\item \ldots
|
||||||
|
\item we are done
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Motivation IV}
|
||||||
|
\begin{itemize}
|
||||||
|
\item common practice
|
||||||
|
\begin{itemize}
|
||||||
|
\item think backwards to find proof
|
||||||
|
\item write down the proof that was found in forward style
|
||||||
|
\end{itemize}
|
||||||
|
\item often switch between backward and forward style within a proof\\
|
||||||
|
Example: induction proof
|
||||||
|
\begin{itemize}
|
||||||
|
\item backward step: induct on \ldots
|
||||||
|
\item forward steps: prove base case and induction case
|
||||||
|
\end{itemize}
|
||||||
|
\item whether to use forward or backward proofs depends on
|
||||||
|
\begin{itemize}
|
||||||
|
\item support by the interactive theorem prover you use
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL~4 and close family: emphasis on backward proof
|
||||||
|
\item Isabelle/HOL: emphasis on forward proof
|
||||||
|
\item Coq: emphasis on backward proof
|
||||||
|
\end{itemize}
|
||||||
|
\item your way of thinking
|
||||||
|
\item the theorem you try to prove
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Implementation of Backward Proofs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item proof tactics / backward proofs used for most user-level proofs
|
||||||
|
\item forward proofs usually used for writing automation
|
||||||
|
\end{itemize}
|
||||||
|
\item backward proofs are implemented by \emph{tactics} in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item decomposition into subgoals implemented in SML
|
||||||
|
\item SML datastructures used to keep track of all open subgoals
|
||||||
|
\item forward proof used to construct theorems
|
||||||
|
\end{itemize}
|
||||||
|
\item to understand backward proofs in HOL we need to look at
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{goal} --- SML datatype for proof obligations
|
||||||
|
\item \ml{goalStack} --- library for keeping track of goals
|
||||||
|
\item \ml{tactic} --- SML type for functions performing backward proofs
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Backward Proofs}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Goals}
|
||||||
|
\begin{itemize}
|
||||||
|
\item goals represent proof obligations, \ie theorems we need/want to prove
|
||||||
|
\item the SML type \ml{goal} is an abbreviation for \ml{term list * term}
|
||||||
|
\item the goal \ml{([asm\_1, ..., asm\_n], c)} records that we need/want to prove the theorem
|
||||||
|
\ml{\{asm\_1, ..., asm\_n\} |- c}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example Goals}
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\textbf{Goal} & \textbf{Theorem} \\
|
||||||
|
\ml{([``A``, ``B``], ``A \holAnd{} B``)} & \ml{\{A, B\} |- A \holAnd{} B} \\
|
||||||
|
\ml{([``B``, ``A``], ``A \holAnd{} B``)} & \ml{\{A, B\} |- A \holAnd{} B} \\
|
||||||
|
\ml{([``B \holAnd{} A``], ``A \holAnd{} B``)} & \ml{\{B \holAnd{} A\} |- A \holAnd{} B} \\
|
||||||
|
\ml{([], ``(B \holAnd{} A) ==> (A \holAnd{} B)``)} & \ml{|- (B \holAnd{} A) ==> (A \holAnd{} B)} \\
|
||||||
|
\end{tabular}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Tactics}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the SML type \ml{tactic} is an abbreviation for\\ the type \ml{goal -> goal list * validation}
|
||||||
|
\item \ml{validation} is an abbreviation for \ml{thm list -> thm}
|
||||||
|
\item given a goal, a tactic
|
||||||
|
\begin{itemize}
|
||||||
|
\item decides into which subgoals to decompose the goal
|
||||||
|
\item returns this list of subgoals
|
||||||
|
\item returns a validation that
|
||||||
|
\begin{itemize}
|
||||||
|
\item given a list of theorems for the computed subgoals
|
||||||
|
\item produces a theorem for the original goal
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\item special case: empty list of subgoals
|
||||||
|
\begin{itemize}
|
||||||
|
\item the validation (given \ml{[]}) needs to produce a theorem for the goal
|
||||||
|
\end{itemize}
|
||||||
|
\item notice: a tactic might be invalid
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Example --- \ml{CONJ\_TAC}}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right={CONJ}]{\Gamma \entails p\\\Delta \entails q}{\Gamma \cup \Delta \entails p\ \wedge\ q}\qquad\qquad
|
||||||
|
\inferrule*{\texttt{t} \equiv \texttt{conj1 \holAnd{} conj2}\\\\
|
||||||
|
\texttt{asl} \entails \texttt{conj1}\\\texttt{asl} \entails \texttt{conj2}}
|
||||||
|
{\texttt{asl} \entails \texttt{t}}$
|
||||||
|
\end{center}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\small
|
||||||
|
val CONJ_TAC: tactic = fn (asl, t) =>
|
||||||
|
let
|
||||||
|
val (conj1, conj2) = dest_conj t
|
||||||
|
in
|
||||||
|
([(asl, conj1), (asl, conj2)],
|
||||||
|
fn [th1, th2] => CONJ th1 th2 | _ => raise Match)
|
||||||
|
end
|
||||||
|
handle HOL_ERR _ => raise ERR "CONJ_TAC" ""
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Example --- \ml{EQ\_TAC}}
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*[right=IMP\_ANTISYM\_RULE]{\Gamma \entails p \Longrightarrow q\\\\\Delta \entails q \Longrightarrow p}{\Gamma \cup \Delta \entails p = q}
|
||||||
|
\qquad\qquad
|
||||||
|
\inferrule*{\texttt{t} \equiv \texttt{lhs = rhs}\\\\
|
||||||
|
\texttt{asl} \entails \texttt{lhs ==> rhs}\\\\
|
||||||
|
\texttt{asl} \entails \texttt{rhs ==> lhs}}
|
||||||
|
{\texttt{asl} \entails \texttt{t}}$
|
||||||
|
\end{center}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\small
|
||||||
|
val EQ_TAC: tactic = fn (asl, t) =>
|
||||||
|
let
|
||||||
|
val (lhs, rhs) = dest_eq t
|
||||||
|
in
|
||||||
|
([(asl, mk_imp (lhs, rhs)), (asl, mk_imp (rhs, lhs))],
|
||||||
|
fn [th1, th2] => IMP_ANTISYM_RULE th1 th2
|
||||||
|
| _ => raise Match)
|
||||||
|
end
|
||||||
|
handle HOL_ERR _ => raise ERR "EQ_TAC" ""
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{proofManagerLib / goalStack}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the \ml{proofManagerLib} keeps track of open goals
|
||||||
|
\item it uses \ml{goalStack} internally
|
||||||
|
\item important commands
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{g} --- set up new goal
|
||||||
|
\item \emph{e} --- expand a tactic
|
||||||
|
\item \emph{p} --- print the current status
|
||||||
|
\item \emph{top\_thm} --- get the proved thm at the end
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example I}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
-
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
g `!A B. A \holAnd{} B <=> B \holAnd{} A`;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
Initial goal:
|
||||||
|
|
||||||
|
!A B. A \holAnd{} B <=> B \holAnd{} A
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example II}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
Initial goal:
|
||||||
|
|
||||||
|
!A B. A \holAnd{} B <=> B \holAnd{} A
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
e GEN_TAC;
|
||||||
|
e GEN_TAC;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
A \holAnd{} B <=> B \holAnd{} A
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example III}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
A \holAnd{} B <=> B \holAnd{} A
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
e EQ_TAC;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
B \holAnd{} A ==> A \holAnd{} B
|
||||||
|
|
||||||
|
A \holAnd{} B ==> B \holAnd{} A
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example IV}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
B \holAnd{} A ==> A \holAnd{} B
|
||||||
|
|
||||||
|
A \holAnd{} B ==> B \holAnd{} A : proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
e STRIP_TAC;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
B \holAnd{} A
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example V}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}B \holAnd{} A
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}e CONJ_TAC;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}A
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
|
||||||
|
B
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example VI}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}A
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
|
||||||
|
B
|
||||||
|
------------------------------------
|
||||||
|
0. A
|
||||||
|
1. B
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}e (ACCEPT_TAC (ASSUME ``B:bool``));
|
||||||
|
e (ACCEPT_TAC (ASSUME ``A:bool``));
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}B \holAnd{} A ==> A \holAnd{} B
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example VII}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}B \holAnd{} A ==> A \holAnd{} B
|
||||||
|
|
||||||
|
: proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}e STRIP_TAC;
|
||||||
|
e (ASM_REWRITE_TAC[]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{New Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}Initial goal proved.
|
||||||
|
|- !A B. A \holAnd{} B <=> B \holAnd{} A:
|
||||||
|
proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example VIII}
|
||||||
|
|
||||||
|
\begin{block}{Previous Goalstack}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}Initial goal proved.
|
||||||
|
|- !A B. A \holAnd{} B <=> B \holAnd{} A:
|
||||||
|
proof
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{User Action}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm = top_thm();
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Result}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm =
|
||||||
|
|- !A B. A \holAnd{} B <=> B \holAnd{} A:
|
||||||
|
thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example IX}
|
||||||
|
|
||||||
|
\begin{block}{Combined Tactic}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm = prove (``!A B. A \holAnd{} B <=> B \holAnd{} A``,
|
||||||
|
GEN_TAC >> GEN_TAC >>
|
||||||
|
EQ_TAC >| [
|
||||||
|
STRIP_TAC >>
|
||||||
|
STRIP_TAC >| [
|
||||||
|
ACCEPT_TAC (ASSUME ``B:bool``),
|
||||||
|
ACCEPT_TAC (ASSUME ``A:bool``)
|
||||||
|
],
|
||||||
|
|
||||||
|
STRIP_TAC >>
|
||||||
|
ASM_REWRITE_TAC[]
|
||||||
|
]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Result}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm =
|
||||||
|
|- !A B. A \holAnd{} B <=> B \holAnd{} A:
|
||||||
|
thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactic Proof Example X}
|
||||||
|
|
||||||
|
\begin{block}{Cleaned-up Tactic}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm = prove (``!A B. A \holAnd{} B <=> B \holAnd{} A``,
|
||||||
|
REPEAT GEN_TAC >>
|
||||||
|
EQ_TAC >> (
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
ASM_REWRITE_TAC []
|
||||||
|
));
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Result}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\scriptsize{}val thm =
|
||||||
|
|- !A B. A \holAnd{} B <=> B \holAnd{} A:
|
||||||
|
thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{General Discussion}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Summary Backward Proofs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item in HOL most user-level proofs are tactic-based
|
||||||
|
\begin{itemize}
|
||||||
|
\item automation often written in forward style
|
||||||
|
\item low-level, basic proofs written in forward style
|
||||||
|
\item nearly everything else is written in backward (tactic) style
|
||||||
|
\end{itemize}
|
||||||
|
\item there are \emph{many} different tactics
|
||||||
|
\item in the lecture only the most basic ones will be discussed
|
||||||
|
\item \alert{you need to learn about tactics on your own}
|
||||||
|
\begin{itemize}
|
||||||
|
\item good starting point: \texttt{Quick} manual
|
||||||
|
\item learning finer points takes a lot of time
|
||||||
|
\item exercises require you to read up on tactics
|
||||||
|
\end{itemize}
|
||||||
|
\item often there are many ways to prove a statement; which tactics to use depends on
|
||||||
|
\begin{itemize}
|
||||||
|
\item personal way of thinking
|
||||||
|
\item personal style and preferences
|
||||||
|
\item maintainability, clarity, elegance, robustness
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
675
lectures/08_basic_tactics.tex
Normal file
@ -0,0 +1,675 @@
|
|||||||
|
\part{Basic Tactics}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Syntax of Tactics in HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item originally tactics were written all in capital letters with underscores\\
|
||||||
|
Example: \hol{ALL\_TAC}
|
||||||
|
\item since 2010 more and more tactics have overloaded lower-case syntax\\
|
||||||
|
Example: \hol{all\_tac}
|
||||||
|
\item sometimes, the lower-case version is shortened\\
|
||||||
|
Example: \hol{REPEAT}, \hol{rpt}
|
||||||
|
\item sometimes, there is special syntax\\
|
||||||
|
Example: \hol{THEN}, \hol{\textbsl{}\textbsl{}}, \hol{>>}
|
||||||
|
\item which one to use is mostly a matter of personal taste
|
||||||
|
\begin{itemize}
|
||||||
|
\item all-capital names are hard to read and type
|
||||||
|
\item however, lower-case versions do not exist for all tactics
|
||||||
|
\item mixed lower- and upper-case tactics are even harder to read
|
||||||
|
\item often the shortened lower-case name is not very \textit{descriptive}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\bottomstatement{In the lecture we will mostly use the old-style names; a small sketch of both styles follows.}
|
||||||
|
\end{frame}
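\begin{frame}[fragile]
\frametitle{Tactic Syntax - A Small Sketch}

\begin{itemize}
\item a minimal sketch of the two syntax styles side by side
\item it assumes the lower-case aliases \hol{rpt} and \hol{strip\_tac} are
      available in your HOL version; both proofs run the same tactic
\end{itemize}

\begin{block}{Old-Style and Lower-Case Syntax}
\begin{semiverbatim}\scriptsize
(* old-style, all-capital names *)
val thm1 = prove (``!A B. A \holAnd{} B ==> B``,
  REPEAT STRIP_TAC >> ASM_REWRITE_TAC []);

(* the same proof using lower-case aliases where they exist *)
val thm2 = prove (``!A B. A \holAnd{} B ==> B``,
  rpt strip_tac >> ASM_REWRITE_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}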
|
||||||
|
|
||||||
|
|
||||||
|
\section{Basic Tactics}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Some Basic Tactics}
|
||||||
|
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\hol{GEN\_TAC} & remove outermost all-quantifier \\
|
||||||
|
\hol{DISCH\_TAC} & move antecedent of goal into assumptions \\
|
||||||
|
\hol{CONJ\_TAC} & splits conjunctive goal \\
|
||||||
|
\hol{STRIP\_TAC} & splits on outermost connective (combination\\
|
||||||
|
& \quad of \hol{GEN\_TAC}, \hol{CONJ\_TAC}, \hol{DISCH\_TAC}, \ldots) \\
|
||||||
|
\hol{DISJ1\_TAC} & selects left disjunct \\
|
||||||
|
\hol{DISJ2\_TAC} & selects right disjunct \\
|
||||||
|
\hol{EQ\_TAC} & reduce Boolean equality to implications \\
|
||||||
|
\hol{ASSUME\_TAC}\ thm & add theorem to list of assumptions \\
|
||||||
|
\hol{EXISTS\_TAC} term & provide witness for existential goal \\
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Tacticals}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item tacticals are SML functions that combine tactics to form new tactics
|
||||||
|
\item common workflow
|
||||||
|
\begin{itemize}
|
||||||
|
\item develop large tactic interactively
|
||||||
|
\item using \hol{goalStack} and editor support to execute tactics one by one
|
||||||
|
\item combine tactics manually with tacticals to create larger tactics
|
||||||
|
\item finally end up with one large tactic that solves your goal
|
||||||
|
\item use \hol{prove} or \hol{store\_thm} instead of \hol{goalStack}
|
||||||
|
\end{itemize}
|
||||||
|
\item make sure to \alert{clearly mark proof structure} by \eg
|
||||||
|
\begin{itemize}
|
||||||
|
\item use indentation
|
||||||
|
\item use parentheses
|
||||||
|
\item use appropriate connectives
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item goalStack commands like \hol{e} or \hol{g} should not appear in your final proof
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Some Basic Tacticals}
|
||||||
|
|
||||||
|
\begin{tabular}{lll}
|
||||||
|
tac1 \hol{>>} tac2 & \hol{THEN}, \hol{\textbsl{}\textbsl{}} & applies tactics in sequence \\
|
||||||
|
tac \hol{>|} tacL & \hol{THENL} & applies list of tactics to subgoals \\
|
||||||
|
tac1 \hol{>-} tac2 & \hol{THEN1} & applies tac2 to the first subgoal of tac1 \\
|
||||||
|
\hol{REPEAT} tac & \hol{rpt} & repeats tac until it fails \\
|
||||||
|
\hol{NTAC} n tac & & apply tac n times \\
|
||||||
|
\hol{REVERSE} tac & \hol{reverse} & reverses the order of subgoals \\
|
||||||
|
tac1 \hol{ORELSE} tac2 & & applies tac2 only if tac1 fails \\
|
||||||
|
\hol{TRY} tac & & applies tac, does nothing if it fails \\
|
||||||
|
\hol{ALL\_TAC} & \hol{all\_tac} & do nothing \\
|
||||||
|
\hol{NO\_TAC} & & fail
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
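\begin{frame}[fragile]
\frametitle{Tacticals - A Small Sketch}

\begin{itemize}
\item a minimal sketch combining tacticals from the previous slide;
      the goal itself is only for illustration
\item \hol{>-} discharges the first subgoal produced by \hol{CONJ\_TAC},
      \hol{>>} continues with the remaining one
\end{itemize}

\begin{block}{Combining Tactics with Tacticals}
\begin{semiverbatim}\scriptsize
val thm = prove (``!A B. A \holAnd{} B ==> B \holAnd{} A``,
  REPEAT GEN_TAC >> STRIP_TAC >> CONJ_TAC
  >- ASM_REWRITE_TAC []    (* first subgoal: B *)
  >> ASM_REWRITE_TAC []);  (* remaining subgoal: A *)
\end{semiverbatim}
\end{block}
\end{frame}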
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Basic Rewrite Tactics}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item (equational) rewriting is at the core of HOL's automation
|
||||||
|
\item we will discuss it in detail later
|
||||||
|
\item the details are complex, but basic usage is straightforward (a small sketch follows on the next slide)
|
||||||
|
\begin{itemize}
|
||||||
|
\item given a theorem \hol{rewr\_thm} of form \hol{|- P\ x = Q\ x} and a term \hol{t}
|
||||||
|
\item rewriting \hol{t} with \hol{rewr\_thm} means
|
||||||
|
\item replacing each occurrence of a term \hol{P c} for some \hol{c} with \hol{Q c} in \hol{t}
|
||||||
|
\end{itemize}
|
||||||
|
\item \alert{warning:} rewriting may loop\\Example: rewriting with theorem \hol{|- X <=> (X \holAnd{} T)}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\hol{REWRITE\_TAC} thms & rewrite goal using equations found\\
|
||||||
|
& in given list of theorems \\
|
||||||
|
\hol{ASM\_REWRITE\_TAC} thms & in addition use assumptions \\
|
||||||
|
\hol{ONCE\_REWRITE\_TAC} thms & rewrite once in goal using equations\\
|
||||||
|
\hol{ONCE\_ASM\_REWRITE\_TAC} thms & rewrite once using assumptions
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
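\begin{frame}[fragile]
\frametitle{Basic Rewrite Tactics - A Small Sketch}

\begin{itemize}
\item two minimal sketches; the goals and the chosen lemmata are only for illustration
\item \hol{REWRITE\_TAC} uses equations from \hol{listTheory},
      \hol{ASM\_REWRITE\_TAC} additionally uses the assumption \hol{A <=> B}
\end{itemize}

\begin{block}{Rewriting with Theorems and with Assumptions}
\begin{semiverbatim}\scriptsize
val thm1 = prove (``REVERSE [x] = [x]``,
  REWRITE_TAC [listTheory.REVERSE_DEF, listTheory.APPEND]);

val thm2 = prove (``!A B. (A <=> B) ==> (A \holAnd{} B <=> A)``,
  REPEAT STRIP_TAC >> ASM_REWRITE_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}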
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Case-Split and Induction Tactics}
|
||||||
|
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\hol{Induct\_on} `term` & induct on \texttt{term} \\
|
||||||
|
\hol{Induct} & induct on all-quantifier \\
|
||||||
|
\hol{Cases\_on} `term` & case-split on \texttt{term} \\
|
||||||
|
\hol{Cases} & case-split on all-quantifier \\
|
||||||
|
\hol{MATCH\_MP\_TAC} thm & apply rule \\
|
||||||
|
\hol{IRULE\_TAC} thm & generalised apply rule
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
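\begin{frame}[fragile]
\frametitle{Case-Split Tactics - A Small Sketch}

\begin{itemize}
\item a minimal sketch of \hol{Cases}; the statement is the standard case
      distinction on natural numbers and serves only as an illustration
\item \hol{Cases} splits on the outermost all-quantified variable (here of
      type \texttt{num}); \hol{METIS\_TAC []} closes both resulting subgoals
\end{itemize}

\begin{block}{Case Split on a Natural Number}
\begin{semiverbatim}\scriptsize
val thm = prove (``!n. (n = 0) \holOr{} (?m. n = SUC m)``,
  Cases >> METIS_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}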
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Assumption Tactics}
|
||||||
|
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\hol{POP\_ASSUM} thm-tac & use and remove first assumption \\
|
||||||
|
& \-\quad common usage \hol{POP\_ASSUM MP\_TAC} \\[1em]
|
||||||
|
\hol{PAT\_ASSUM} term thm-tac& use (and remove) first \\
|
||||||
|
\-\quad also \hol{PAT\_X\_ASSUM} term thm-tac& \quad assumption matching pattern \\[1em]
|
||||||
|
\hol{WEAKEN\_TAC} term-pred & removes first assumption \\
|
||||||
|
& \quad{}satisfying predicate
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
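\begin{frame}[fragile]
\frametitle{Assumption Tactics - A Small Sketch}

\begin{itemize}
\item a minimal sketch of the common idiom \hol{POP\_ASSUM MP\_TAC};
      the goal is only for illustration
\item the most recently added assumption (here \hol{A ==> B}) is removed and
      put back into the goal as an antecedent; the remaining assumption
      \hol{A} is then used by \hol{ASM\_REWRITE\_TAC}
\end{itemize}

\begin{block}{Using and Removing an Assumption}
\begin{semiverbatim}\scriptsize
val thm = prove (``!A B. A ==> (A ==> B) ==> B``,
  REPEAT STRIP_TAC >>
  POP_ASSUM MP_TAC >>
  ASM_REWRITE_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}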
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Decision Procedure Tactics}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item decision procedures try to solve the current goal completely
|
||||||
|
\item they either succeed or fail
|
||||||
|
\item no partial progress
|
||||||
|
\item decision procedures are vital for automation
|
||||||
|
\end{itemize}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
\hol{TAUT\_TAC} & propositional logic tautology checker \\
|
||||||
|
\hol{DECIDE\_TAC} & linear arithmetic for \texttt{num} \\
|
||||||
|
\hol{METIS\_TAC} thms & first order prover \\
|
||||||
|
\hol{numLib.ARITH\_TAC} & Presburger arithmetic \\
|
||||||
|
\hol{intLib.ARITH\_TAC} & uses Omega test
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
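\begin{frame}[fragile]
\frametitle{Decision Procedure Tactics - A Small Sketch}

\begin{itemize}
\item two minimal sketches; the statements are only for illustration
\item \hol{DECIDE\_TAC} settles a linear arithmetic fact over \texttt{num},
      \hol{METIS\_TAC []} a simple first-order one
\end{itemize}

\begin{block}{Letting Decision Procedures Do the Work}
\begin{semiverbatim}\scriptsize
val thm1 = prove (``!x y:num. x < y ==> x <= y + 1``,
  DECIDE_TAC);

val thm2 = prove (``!P Q c. (!x. P x ==> Q x) \holAnd{} P c ==> Q c``,
  METIS_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}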
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Subgoal Tactics}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is vital to structure your proofs well
|
||||||
|
\begin{itemize}
|
||||||
|
\item improved maintainability
|
||||||
|
\item improved readability
|
||||||
|
\item improved reusability
|
||||||
|
\item saves time in the medium run
|
||||||
|
\end{itemize}
|
||||||
|
\item therefore, use many small lemmata
|
||||||
|
\item also, use many explicit subgoals
|
||||||
|
\end{itemize}
|
||||||
|
\bigskip
|
||||||
|
\begin{tabular}{ll}
|
||||||
|
`term-frag` \hol{by} tac & show term with tac and\\
|
||||||
|
& add it to assumptions \\
|
||||||
|
`term-frag` \hol{suffices\_by} tac & show it suffices to prove term
|
||||||
|
\end{tabular}
|
||||||
|
\end{frame}
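\begin{frame}[fragile]
\frametitle{Subgoal Tactics - A Small Sketch}

\begin{itemize}
\item a minimal sketch of \hol{by}; the statement is only for illustration
\item the intermediate fact \hol{x < SUC x} is proved by \hol{DECIDE\_TAC},
      added to the assumptions and picked up by the final rewrite
\end{itemize}

\begin{block}{Introducing an Explicit Subgoal with \texttt{by}}
\begin{semiverbatim}\scriptsize
val thm = prove (``!x:num. ?y. x < y``,
  GEN_TAC >>
  `x < SUC x` by DECIDE_TAC >>
  EXISTS_TAC ``SUC x`` >>
  ASM_REWRITE_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}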
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Term Fragments / Term Quotations}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that \hol{by} and \hol{suffices\_by} take \emph{term fragments}
|
||||||
|
\item term fragments are also called \emph{term quotations}
|
||||||
|
\item they represent (partially) unparsed terms
|
||||||
|
\item parsing takes place during execution of the tactic, in the context of the goal
|
||||||
|
\item this helps to avoid type annotations
|
||||||
|
\item however, this also means that syntax errors show up late
|
||||||
|
\item the library \emph{Q} defines many tactics using term fragments
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
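\begin{frame}[fragile]
\frametitle{Term Quotations - A Small Sketch}

\begin{itemize}
\item a minimal sketch of a tactic from the \emph{Q} library; the goal is
      only for illustration
\item \hol{Q.EXISTS\_TAC} takes a term quotation, so the witness \hol{x} is
      parsed in the context of the goal and refers to the goal's variable
      without any type annotation
\end{itemize}

\begin{block}{Providing a Witness via a Term Quotation}
\begin{semiverbatim}\scriptsize
val thm = prove (``!x:'a. ?y. y = x``,
  GEN_TAC >>
  Q.EXISTS_TAC `x` >>
  REWRITE_TAC []);
\end{semiverbatim}
\end{block}
\end{frame}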
|
||||||
|
|
||||||
|
|
||||||
|
\section{Examples}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Exercises}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item here many tactics are presented in a very short amount of time
|
||||||
|
\item there are many, many more important tactics out there
|
||||||
|
\item few people can learn a programming language just by reading manuals
|
||||||
|
\item similarly, few people can learn HOL just by reading and listening
|
||||||
|
\item you should write your own proofs and play around with these tactics
|
||||||
|
\item solving the exercises is highly recommended\\(and actually required if you want credits for this course)
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 1}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item we want to prove \hol{!l.\ LENGTH (APPEND l l) = 2 * LENGTH l}
|
||||||
|
\item first step: set up goal on \hol{goalStack}
|
||||||
|
\item at same time start writing proof script
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item run \texttt{g ``!l.\ LENGTH (APPEND l l) = 2 * LENGTH l``}
|
||||||
|
\item this is done by hol-mode
|
||||||
|
\item move cursor inside term and press \texttt{M-h g}\\
|
||||||
|
(menu-entry \texttt{HOL - Goalstack - New goal})
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 2}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
!l. LENGTH (l ++ l) = 2 * LENGTH l
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the outermost connective is an all-quantifier
|
||||||
|
\item let's get rid of it via \hol{GEN\_TAC}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (l ++ l) = 2 * LENGTH l``,
|
||||||
|
GEN_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item run \texttt{e GEN\_TAC}
|
||||||
|
\item this is done by hol-mode
|
||||||
|
\item mark line with \texttt{GEN\_TAC} and press \texttt{M-h e}\\
|
||||||
|
(menu-entry \texttt{HOL - Goalstack - Apply tactic})
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 3}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
LENGTH (l ++ l) = 2 * LENGTH l
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{LENGTH} of \hol{APPEND} can be simplified
|
||||||
|
\item let's search for an appropriate lemma with \ml{DB.match}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item run \ml{DB.print\_match [] ``LENGTH (\_ ++ \_)``}
|
||||||
|
\item this is done via hol-mode
|
||||||
|
\item press \texttt{M-h m} and enter term pattern\\
|
||||||
|
(menu-entry \texttt{HOL - Misc - DB match})
|
||||||
|
\item this finds the theorem \ml{listTheory.LENGTH\_APPEND}\\
|
||||||
|
\hol{|- !l1 l2. LENGTH (l1 ++ l2) = LENGTH l1 + LENGTH l2}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 4}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
LENGTH (l ++ l) = 2 * LENGTH l
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's rewrite with found theorem \ml{listTheory.LENGTH\_APPEND}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH\_APPEND]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item connect the new tactic with tactical \hol{>>} (\hol{THEN})
|
||||||
|
\item use hol-mode to expand the new tactic
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 5}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
LENGTH l + LENGTH l = 2 * LENGTH l
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's search for a theorem to simplify \hol{2 * LENGTH l}
|
||||||
|
\item prepare for extending the previous rewrite tactic
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH\_APPEND]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{DB.match} finds theorem \hol{arithmeticTheory.TIMES2}
|
||||||
|
\item press \texttt{M-h b} and undo last tactic expansion\\
|
||||||
|
(menu-entry \texttt{HOL - Goalstack - Back up})
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 6}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
LENGTH (l ++ l) = 2 * LENGTH l
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item extend the previous rewrite tactic
|
||||||
|
\item finish proof
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH\_APPEND, arithmeticTheory.TIMES2]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item add \hol{TIMES2} to the list of theorems used by rewrite tactic
|
||||||
|
\item use hol-mode to expand the extended rewrite tactic
|
||||||
|
\item goal is solved, so let's add closing parenthesis and semicolon
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example I - Slide 7}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item we have a finished tactic proving our goal
|
||||||
|
\item notice that \hol{GEN\_TAC} is not needed
|
||||||
|
\item let's polish the proof script
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH\_APPEND, arithmeticTheory.TIMES2]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Polished Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val LENGTH_APPEND_SAME = prove (
|
||||||
|
``!l. LENGTH (APPEND l l) = 2 * LENGTH l``,
|
||||||
|
REWRITE_TAC[listTheory.LENGTH\_APPEND, arithmeticTheory.TIMES2]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 1}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's prove something slightly more complicated
|
||||||
|
\item drop old goal by pressing \texttt{M-h d}\\
|
||||||
|
(menu-entry \texttt{HOL - Goalstack - Drop goal})
|
||||||
|
\item set up goal on \hol{goalStack} (\texttt{M-h g})
|
||||||
|
\item at same time start writing proof script
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``!x1 x2 x3 l1 l2 l3.
|
||||||
|
(MEM x1 l1 \holAnd{} MEM x2 l2 \holAnd{} MEM x3 l3) \holAnd{}
|
||||||
|
((x1 <= x2) \holAnd{} (x2 <= x3) \holAnd{} x3 <= SUC x1) ==>
|
||||||
|
~(ALL_DISTINCT (l1 ++ l2 ++ l3))``,
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 2}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
!x1 x2 x3 l1 l2 l3.
|
||||||
|
(MEM x1 l1 \holAnd{} MEM x2 l2 \holAnd{} MEM x3 l3) \holAnd{}
|
||||||
|
x1 <= x2 \holAnd{} x2 <= x3 \holAnd{} x3 <= SUC x1 ==>
|
||||||
|
~ALL_DISTINCT (l1 ++ l2 ++ l3)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's strip the goal
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``!x1 x2 x3 l1 l2 l3.
|
||||||
|
(MEM x1 l1 \holAnd{} MEM x2 l2 \holAnd{} MEM x3 l3) \holAnd{}
|
||||||
|
((x1 <= x2) \holAnd{} (x2 <= x3) \holAnd{} x3 <= SUC x1) ==>
|
||||||
|
~(ALL_DISTINCT (l1 ++ l2 ++ l3))``,
|
||||||
|
REPEAT STRIP\_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 2}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
!x1 x2 x3 l1 l2 l3.
|
||||||
|
(MEM x1 l1 \holAnd{} MEM x2 l2 \holAnd{} MEM x3 l3) \holAnd{}
|
||||||
|
x1 <= x2 \holAnd{} x2 <= x3 \holAnd{} x3 <= SUC x1 ==>
|
||||||
|
~ALL_DISTINCT (l1 ++ l2 ++ l3)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's strip the goal
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``...``,
|
||||||
|
REPEAT STRIP\_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item add \hol{REPEAT STRIP\_TAC} to proof script
|
||||||
|
\item expand this tactic using hol-mode
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 3}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
F
|
||||||
|
------------------------------------
|
||||||
|
0. MEM x1 l1 4. x2 <= x3
|
||||||
|
1. MEM x2 l2 5. x3 <= SUC x1
|
||||||
|
2. MEM x3 l3 6. ALL_DISTINCT (l1 ++ l2 ++ l3)
|
||||||
|
3. x1 <= x2
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item oops, we did too much; we would like to keep \texttt{ALL\_DISTINCT} in the goal
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``...``,
|
||||||
|
REPEAT GEN\_TAC >> STRIP\_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Actions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item undo \hol{REPEAT STRIP\_TAC} (\texttt{M-h b})
|
||||||
|
\item expand more fine-tuned strip tactic
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 4}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
~ALL_DISTINCT (l1 ++ l2 ++ l3)
|
||||||
|
------------------------------------
|
||||||
|
0. MEM x1 l1 3. x1 <= x2
|
||||||
|
1. MEM x2 l2 4. x2 <= x3
|
||||||
|
2. MEM x3 l3 5. x3 <= SUC x1
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item now let's simplify \hol{ALL\_DISTINCT}
|
||||||
|
\item search for suitable theorems with \hol{DB.match}
|
||||||
|
\item use them with rewrite tactic
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``...``,
|
||||||
|
REPEAT GEN\_TAC >> STRIP\_TAC >>
|
||||||
|
REWRITE\_TAC[listTheory.ALL_DISTINCT\_APPEND, listTheory.MEM\_APPEND]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 5}
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
~((ALL_DISTINCT l1 \holAnd{} ALL_DISTINCT l2 \holAnd{} !e. MEM e l1 ==> ~MEM e l2) \holAnd{}
|
||||||
|
ALL_DISTINCT l3 \holAnd{} !e. MEM e l1 \holOr{} MEM e l2 ==> ~MEM e l3)
|
||||||
|
------------------------------------
|
||||||
|
0. MEM x1 l1 3. x1 <= x2
|
||||||
|
1. MEM x2 l2 4. x2 <= x3
|
||||||
|
2. MEM x3 l3 5. x3 <= SUC x1
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item from assumptions 3, 4 and 5 we know \hol{x2 = x1 \holOr{} x2 = x3}
|
||||||
|
\item let's deduce this fact by \hol{DECIDE\_TAC}
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``...``,
|
||||||
|
REPEAT GEN\_TAC >> STRIP\_TAC >>
|
||||||
|
REWRITE\_TAC[listTheory.ALL_DISTINCT\_APPEND, listTheory.MEM\_APPEND] >>
|
||||||
|
`(x2 = x1) \holOr{} (x2 = x3)` by DECIDE_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 6}
|
||||||
|
|
||||||
|
\begin{block}{Current Goals --- 2 subgoals, one for each disjunct}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
~((ALL_DISTINCT l1 \holAnd{} ALL_DISTINCT l2 \holAnd{} !e. MEM e l1 ==> ~MEM e l2) \holAnd{}
|
||||||
|
ALL_DISTINCT l3 \holAnd{} !e. MEM e l1 \holOr{} MEM e l2 ==> ~MEM e l3)
|
||||||
|
------------------------------------
|
||||||
|
0. MEM x1 l1 4. x2 <= x3
|
||||||
|
1. MEM x2 l2 5. x3 <= SUC x1
|
||||||
|
2. MEM x3 l3 6a. x2 = x1
|
||||||
|
3. x1 <= x2 6b. x2 = x3
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item both goals are easily solved by first-order reasoning
|
||||||
|
\item let's use \hol{METIS\_TAC[]} for both subgoals
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (``...``,
|
||||||
|
REPEAT GEN\_TAC >> STRIP\_TAC >>
|
||||||
|
REWRITE\_TAC[listTheory.ALL_DISTINCT\_APPEND, listTheory.MEM\_APPEND] >>
|
||||||
|
`(x2 = x1) \holOr{} (x2 = x3)` by DECIDE_TAC >> (
|
||||||
|
METIS\_TAC[]
|
||||||
|
));
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Tactical Proof - Example II - Slide 7}
|
||||||
|
|
||||||
|
\begin{block}{Finished Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val NOT_ALL_DISTINCT_LEMMA = prove (
|
||||||
|
``!x1 x2 x3 l1 l2 l3.
|
||||||
|
(MEM x1 l1 \holAnd{} MEM x2 l2 \holAnd{} MEM x3 l3) \holAnd{}
|
||||||
|
((x1 <= x2) \holAnd{} (x2 <= x3) \holAnd{} x3 <= SUC x1) ==>
|
||||||
|
~(ALL_DISTINCT (l1 ++ l2 ++ l3))``,
|
||||||
|
REPEAT GEN\_TAC >> STRIP\_TAC >>
|
||||||
|
REWRITE\_TAC[listTheory.ALL_DISTINCT\_APPEND, listTheory.MEM\_APPEND] >>
|
||||||
|
`(x2 = x1) \holOr{} (x2 = x3)` by DECIDE_TAC >> (
|
||||||
|
METIS\_TAC[]
|
||||||
|
));
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that proof structure is explicit
|
||||||
|
\item parentheses and indentation used to mark new subgoals
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
174
lectures/09_induction.tex
Normal file
@ -0,0 +1,174 @@
|
|||||||
|
\part{Induction Proofs}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Mathematical Induction}
|
||||||
|
\begin{itemize}
|
||||||
|
\item mathematical (\aka natural) induction principle:\\
|
||||||
|
If a property $P$ holds for $0$ and $P(n)$ implies $P(n+1)$ for all $n$,\\
|
||||||
|
then $P(n)$ holds for all $n$.
|
||||||
|
\item HOL is expressive enough to encode this principle as a theorem.\\\medskip
|
||||||
|
\hol{|- !P. P 0 \holAnd{} (!n.\ P n ==> P (SUC n)) ==> !n.\ P n}\medskip
|
||||||
|
\item Performing mathematical induction in HOL means applying this theorem (\eg via \hol{HO\_MATCH\_MP\_TAC})
|
||||||
|
|
||||||
|
\item there are many similar induction theorems in HOL
|
||||||
|
\item Example: complete induction principle\\\medskip
|
||||||
|
\hol{|- !P. (!n.\ (!m.\ m < n ==> P m) ==> P n) ==> !n.\ P n}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
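\begin{frame}[fragile]
\frametitle{Mathematical Induction - A Small Sketch}

\begin{itemize}
\item a minimal sketch proving \hol{!n.\ n + 0 = n} twice: once with
      \hol{Induct} and once by applying the induction theorem explicitly
      via \hol{HO\_MATCH\_MP\_TAC}
\item the rewrite uses \hol{arithmeticTheory.ADD}, the defining equations of addition
\end{itemize}

\begin{block}{Using the Induction Principle}
\begin{semiverbatim}\scriptsize
val thm1 = prove (``!n. n + 0 = n``,
  Induct >> ASM_REWRITE_TAC [arithmeticTheory.ADD]);

val thm2 = prove (``!n. n + 0 = n``,
  HO_MATCH_MP_TAC numTheory.INDUCTION >>
  REPEAT STRIP_TAC >> ASM_REWRITE_TAC [arithmeticTheory.ADD]);
\end{semiverbatim}
\end{block}
\end{frame}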
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Structural Induction Theorems}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{structural induction} theorems are an important special form of induction theorems
|
||||||
|
\item they describe performing induction on the structure of a datatype
|
||||||
|
\item Example: \hol{\scriptsize|- !P.\ P [] \holAnd{} (!t.\ P t ==> !h.\ P (h::t)) ==> !l.\ P l}
|
||||||
|
\item structural induction is used very frequently in HOL
|
||||||
|
\item for each algebraic datatype, there is an induction theorem
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Other Induction Theorems}
|
||||||
|
\begin{itemize}
|
||||||
|
\item there are many induction theorems in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item datatype definitions lead to induction theorems
|
||||||
|
\item recursive function definitions produce corresponding induction theorems
|
||||||
|
\item recursive relation definitions give rise to induction theorems
|
||||||
|
\item many are manually defined
|
||||||
|
\end{itemize}
|
||||||
|
\item Examples\\\bigskip{\scriptsize
|
||||||
|
\hol{|- !P.\ P [] \holAnd{} (!l.\ P l ==> !x.\ P (SNOC x l)) ==> !l.\ P l}\\\bigskip
|
||||||
|
\hol{|- !P.\ P FEMPTY \holAnd{} \\
|
||||||
|
\-\qquad\quad(!f.\ P f ==> !x y.\ x NOTIN FDOM f ==> P (f |+ (x,y))) ==> !f.\ P f}\\\bigskip
|
||||||
|
\hol{|- !P.\ P \{\} \holAnd{} \\
|
||||||
|
\-\qquad\quad(!s.\ FINITE s \holAnd{} P s ==> !e.\ e NOTIN s ==> P (e INSERT s)) ==> \\
|
||||||
|
\-\qquad\quad!s.\ FINITE s ==> P s}\\\bigskip
|
||||||
|
\hol{|- !R P.\ (!x y.\ R x y ==> P x y) \holAnd{} (!x y z.\ P x y \holAnd{} P y z ==> P x z) ==>\\
|
||||||
|
\-\qquad\quad!u v.\ R$^+$ u v ==> P u v}}
|
||||||
|
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Induction (and Case-Split) Tactics}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the tactic \hol{Induct} (or \hol{Induct\_on}) is usually used to start induction proofs
|
||||||
|
\item it looks at the type of the quantifier (or its argument) and applies the default induction theorem for this type
|
||||||
|
\item this is usually what one needs
|
||||||
|
\item other (non default) induction theorems can be applied via \hol{INDUCT\_THEN} or \hol{HO\_MATCH\_MP\_TAC}
|
||||||
|
\item similarly, \hol{Cases\_on} picks and applies default case-split theorems
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Induction Proof - Example I - Slide 1}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's prove via induction\\
|
||||||
|
\hol{!l1 l2.\ REVERSE (l1 ++ l2) = REVERSE l2 ++ REVERSE l1}
|
||||||
|
\item we set up the goal and start an induction proof on \hol{l1}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val REVERSE_APPEND = prove (
|
||||||
|
``!l1 l2.\ REVERSE (l1 ++ l2) = REVERSE l2 ++ REVERSE l1``,
|
||||||
|
Induct
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Induction Proof - Example I - Slide 2}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the induction tactic produced two cases
|
||||||
|
\item base case:\\
|
||||||
|
{\scriptsize\hol{!l2.\ REVERSE ([] ++ l2) = REVERSE l2 ++ REVERSE []}}
|
||||||
|
\item induction step:\\
|
||||||
|
{\scriptsize
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\hol{!h l2.\ REVERSE (h::l1 ++ l2) = REVERSE l2 ++ REVERSE (h::l1)}
|
||||||
|
-----------------------------------------------------------
|
||||||
|
\hol{!l2.\ REVERSE (l1 ++ l2) = REVERSE l2 ++ REVERSE l1}
|
||||||
|
\end{semiverbatim}}
|
||||||
|
|
||||||
|
\item both goals can be easily proved by rewriting
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val REVERSE_APPEND = prove (``
|
||||||
|
!l1 l2.\ REVERSE (l1 ++ l2) = REVERSE l2 ++ REVERSE l1``,
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC[REVERSE_DEF, APPEND, APPEND_NIL],
|
||||||
|
ASM_REWRITE_TAC[REVERSE_DEF, APPEND, APPEND_ASSOC]
|
||||||
|
]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Induction Proof - Example II - Slide 1}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's prove via induction\\
|
||||||
|
\hol{!l.\ REVERSE (REVERSE l) = l}
|
||||||
|
\item we set up the goal and start an induction proof on \hol{l}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\small
|
||||||
|
val REVERSE_REVERSE = prove (
|
||||||
|
``!l.\ REVERSE (REVERSE l) = l``,
|
||||||
|
Induct
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Induction Proof - Example II - Slide 2}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the induction tactic produced two cases
|
||||||
|
\item base case:\\
|
||||||
|
{\scriptsize\hol{REVERSE (REVERSE []) = []}}
|
||||||
|
\item induction step:\\
|
||||||
|
{\scriptsize
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\hol{!h.\ REVERSE (REVERSE (h::l)) = h::l}
|
||||||
|
--------------------------------------------
|
||||||
|
\hol{REVERSE (REVERSE l) = l}
|
||||||
|
\end{semiverbatim}}
|
||||||
|
|
||||||
|
\item again both goals can be easily proved by rewriting
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val REVERSE_REVERSE = prove (
|
||||||
|
``!l.\ REVERSE (REVERSE l) = l``,
|
||||||
|
Induct >| [
|
||||||
|
REWRITE_TAC[REVERSE_DEF],
|
||||||
|
ASM_REWRITE_TAC[REVERSE_DEF, REVERSE_APPEND, APPEND]
|
||||||
|
]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
987
lectures/10_definitions.tex
Normal file
@ -0,0 +1,987 @@
|
|||||||
|
\part{Basic Definitions}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{Definitions, Axioms and Oracles}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Definitional Extensions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item there are \emph{conservative definition principles} for types and constants
|
||||||
|
\item conservative means that all theorems that can be proved in the extended theory can also be proved in the original one
|
||||||
|
\item however, such extensions make the theory more convenient to work with
|
||||||
|
\item definitions introduce \alert{no new inconsistencies}
|
||||||
|
\item the HOL community has a very strong tradition of a purely definitional approach
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Axiomatic Extensions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \alert{axioms} are a different approach
|
||||||
|
\item they allow postulating arbitrary properties, \ie extending the logic with arbitrary theorems
|
||||||
|
\item this approach might introduce new inconsistencies
|
||||||
|
\item in HOL axioms are very rarely needed
|
||||||
|
\item using definitions is often considered more elegant
|
||||||
|
\item it is hard to keep track of axioms
|
||||||
|
\item use axioms only if you really know what you are doing
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Oracles}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \alert{oracles} are families of axioms
|
||||||
|
\item however, they are used differently than axioms
|
||||||
|
\item they are used to enable usage of external tools and knowledge
|
||||||
|
\item you might want to use an external automated prover
|
||||||
|
\item this external tool acts as an oracle
|
||||||
|
\begin{itemize}
|
||||||
|
\item it provides answers
|
||||||
|
\item it does not explain or justify these answers
|
||||||
|
\end{itemize}
|
||||||
|
\item you don't know whether this external tool might be buggy
|
||||||
|
\item all theorems proved via it are tagged with a special oracle-tag
|
||||||
|
\item tags are propagated
|
||||||
|
\item this allows keeping track of everything depending on the correctness of this tool
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Oracles II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item Common oracle-tags
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{DISK\_THM} --- theorem was written to disk and read again
|
||||||
|
\item \ml{HolSatLib} --- proved by MiniSat
|
||||||
|
\item \ml{HolSmtLib} --- proved by external SMT solver
|
||||||
|
\item \ml{fast\_proof} --- proof was skipped to compile a theory rapidly
|
||||||
|
\item \ml{cheat} --- we cheated :-)
|
||||||
|
\end{itemize}
|
||||||
|
\item \alert{cheating} via \eg the \hol{cheat} tactic means skipping proofs
|
||||||
|
\item it can be helpful during proof development
|
||||||
|
\begin{itemize}
|
||||||
|
\item test whether some lemmata allow you finishing the proof
|
||||||
|
\item skip lengthy but boring cases and focus on critical parts first
|
||||||
|
\item experiment with exact form of invariants
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item cheats should be removed reasonably quickly
|
||||||
|
\item HOL warns about cheats and skipped proofs
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
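\begin{frame}[fragile]
\frametitle{Cheating - A Small Sketch}

\begin{itemize}
\item a minimal sketch of skipping a subgoal with \hol{cheat} during proof development
\item the free Boolean variable \hol{hard\_part} merely stands for some statement
      whose proof is postponed; the resulting theorem carries the corresponding tag
\end{itemize}

\begin{block}{Skipping a Subgoal}
\begin{semiverbatim}\scriptsize
val tmp_thm = prove (``(1 + 1 = 2) \holAnd{} (hard_part : bool)``,
  CONJ_TAC >| [
    DECIDE_TAC,
    cheat     (* to be replaced by a real proof later *)
  ]);
\end{semiverbatim}
\end{block}
\end{frame}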
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Pitfalls of Definitional Approach}
|
||||||
|
\begin{itemize}
|
||||||
|
\item definitions can't introduce new inconsistencies
|
||||||
|
\item they force you to state all assumed properties at one location
|
||||||
|
\item however, you still need to be careful
|
||||||
|
\item Is your definition really expressing what you had in mind?
|
||||||
|
\item Does your formalisation correspond to the real-world artefact?
|
||||||
|
\item How can you convince others that this is the case?
|
||||||
|
\item we will discuss methods to deal with this later in this course
|
||||||
|
\begin{itemize}
|
||||||
|
\item formal sanity
|
||||||
|
\item conformance testing
|
||||||
|
\item code review
|
||||||
|
\item comments, good names, clear coding style
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item this is highly complex and needs a lot of effort in general
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\section{Primitive Definition Principles}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Specifications}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL allows introducing new constants with certain properties, provided the
|
||||||
|
existence of such constants has been shown
|
||||||
|
\begin{exampleblock}{Specification of \texttt{EVEN} and \texttt{ODD}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> EVEN\_ODD\_EXISTS
|
||||||
|
val it = |- ?even odd. even 0 \holAnd{} ~odd 0 \holAnd{} (!n. even (SUC n) <=> odd n) \holAnd{}
|
||||||
|
(!n. odd (SUC n) <=> even n)
|
||||||
|
|
||||||
|
> val EO\_SPEC = new\_specification ("EO\_SPEC", ["EVEN", "ODD"], EVEN\_ODD\_EXISTS);
|
||||||
|
val EO\_SPEC = |- EVEN 0 \holAnd{} ~ODD 0 \holAnd{} (!n. EVEN (SUC n) <=> ODD n) \holAnd{}
|
||||||
|
(!n. ODD (SUC n) <=> EVEN n)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\item \hol{new\_specification} is a convenience wrapper
|
||||||
|
\begin{itemize}
|
||||||
|
\item it uses existential quantification instead of Hilbert's choice
|
||||||
|
\item deals with pair syntax
|
||||||
|
\item stores resulting definitions in theory
|
||||||
|
\end{itemize}
|
||||||
|
\item \hol{new\_specification} captures the underlying principle nicely
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item special case: new constant defined by equality
|
||||||
|
\begin{exampleblock}{Specification with Equality}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> double_EXISTS
|
||||||
|
val it =
|
||||||
|
|- ?double. (!n. double n = (n + n))
|
||||||
|
|
||||||
|
> val double_def = new_specification ("double_def", ["double"], double_EXISTS);
|
||||||
|
val double_def =
|
||||||
|
|- !n. double n = n + n
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\item there is a specialised method for such simple definitions
|
||||||
|
\begin{exampleblock}{Non Recursive Definitions}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> val DOUBLE_DEF = new_definition ("DOUBLE_DEF", ``DOUBLE n = n + n``)
|
||||||
|
val DOUBLE_DEF =
|
||||||
|
|- !n. DOUBLE n = n + n
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Restrictions for Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item all variables occurring on right-hand-side (rhs) need to be arguments
|
||||||
|
\begin{itemize}
|
||||||
|
\item \eg \hol{new\_definition (..., ``F n = n + m``)} fails
|
||||||
|
\item \hol{m} is free on rhs
|
||||||
|
\end{itemize}
|
||||||
|
\item all type variables occurring on the rhs need to occur on the lhs
|
||||||
|
\begin{itemize}
|
||||||
|
\item \eg \hol{new\_definition ("IS\_FIN\_TY", \\
|
||||||
|
\-\qquad\quad``IS\_FIN\_TY = FINITE (UNIV : 'a set)``)} fails
|
||||||
|
\item \hol{IS\_FIN\_TY} would lead to inconsistency
|
||||||
|
\item \hol{|- FINITE (UNIV : bool set)}
|
||||||
|
\item \hol{|- \holNeg{}FINITE (UNIV : num set)}
|
||||||
|
\item \hol {T <=> FINITE (UNIV:bool set) <=> \\ IS\_FIN\_TY <=>\\ FINITE (UNIV:num set) <=> F}
|
||||||
|
\item therefore, such definitions can't be allowed
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Underspecified Functions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item function specifications do not need to define the function precisely
|
||||||
|
\item multiple different functions satisfying one spec are possible
|
||||||
|
\item functions resulting from such specs are called \emph{underspecified}
|
||||||
|
\item underspecified functions are still total, one just lacks knowledge
|
||||||
|
\item one common application: modelling \emph{partial functions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item functions like \eg \hol{HD} and \hol{TL} are total
|
||||||
|
\item they are defined even for the empty list
|
||||||
|
\item however, it is not specified which value they return for the empty list
|
||||||
|
\item only known: \hol{HD [] = HD []} and \hol{TL [] = TL []}
|
||||||
|
\begin{minipage}{.7\textwidth}\medskip
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val MY_HD_EXISTS = prove (``?hd. !x xs. (hd (x::xs) = x)``, ...);
|
||||||
|
val MY_HD_SPEC =
|
||||||
|
new_specification ("MY_HD_SPEC", ["MY_HD"], MY_HD_EXISTS)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{minipage}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Primitive Type Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL allows introducing non-empty subtypes of existing types
|
||||||
|
\item a predicate \hol{P : ty -> bool} describes a subset of an existing type \hol{ty}
|
||||||
|
\item \hol{ty} may contain type variables
|
||||||
|
\item only \emph{non-empty} types are allowed
|
||||||
|
\item therefore a non-emptiness proof \hol{ex-thm} of the form \hol{?e.\ P e} is needed
|
||||||
|
\item \hol{new\_type\_definition (op-name, ex-thm)} then introduces a new type \hol{op-name}
|
||||||
|
specified by \hol{P}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Primitive Type Definitions - Example 1}
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's try to define a type \hol{dlist} of lists containing no duplicates
|
||||||
|
\item predicate \hol{ALL\_DISTINCT : 'a list -> bool} is used to define it
|
||||||
|
\item easy to prove theorem \texttt{dlist\_exists}: \hol{|- ?l.\ ALL\_DISTINCT l}
|
||||||
|
\item \hol{val dlist\_TY\_DEF = new\_type\_definition("dlist", dlist\_exists)} defines
|
||||||
|
a new type \ml{'a dlist} and returns a theorem\bigskip
|
||||||
|
\begin{semiverbatim}
|
||||||
|
|- ?(rep :'a dlist -> 'a list).
|
||||||
|
TYPE_DEFINITION ALL_DISTINCT rep
|
||||||
|
\end{semiverbatim}\bigskip
|
||||||
|
\item \ml{rep} is a function taking a \hol{'a dlist} to the list representing it
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{rep} is injective
|
||||||
|
\item a list satisfies \hol{ALL\_DISTINCT} iff there is a corresponding \hol{dlist}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Primitive Type Definitions - Example 2}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{define\_new\_type\_bijections} can be used to define bijections between the old and the new type
|
||||||
|
\medskip
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{define_new_type_bijections \{name="dlist_tybij", ABS="abs_dlist",
|
||||||
|
REP="rep_dlist", tyax=\alert{dlist_TY_DEF}\}}
|
||||||
|
|
||||||
|
val it =
|
||||||
|
|- (!a. abs_dlist (rep_dlist a) = a) \holAnd{}
|
||||||
|
(!r. ALL_DISTINCT r <=> (rep_dlist (abs_dlist r) = r))
|
||||||
|
\end{semiverbatim}\medskip
|
||||||
|
\item other useful theorems can be automatically proved by
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{prove\_abs\_fn\_one\_one}
|
||||||
|
\item \hol{prove\_abs\_fn\_onto}
|
||||||
|
\item \hol{prove\_rep\_fn\_one\_one}
|
||||||
|
\item \hol{prove\_rep\_fn\_onto}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Primitive Definition Principles Summary}
|
||||||
|
\begin{itemize}
|
||||||
|
\item primitive definition principles are easily explained
|
||||||
|
\item they lead to conservative extensions
|
||||||
|
\item however, they are cumbersome to use
|
||||||
|
\item the LCF approach allows implementing more convenient definition tools
|
||||||
|
\begin{itemize}
|
||||||
|
\item \alert{\texttt{Datatype}} package
|
||||||
|
\item \alert{\texttt{TFL}} (Total Functional Language) package
|
||||||
|
\item \ml{IndDef} (Inductive Definition) package
|
||||||
|
\item \ml{quotientLib} Quotient Types Library
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\section{Functional Programming}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Functional Programming}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the \hol{Datatype} package allows defining datatypes conveniently
|
||||||
|
\item the \hol{TFL} package allows defining (mutually recursive) functions
|
||||||
|
\item the \hol{EVAL} conversion allows evaluating those definitions
|
||||||
|
\item this gives many HOL developments the feeling of a functional program
|
||||||
|
\item there is really a close connection between functional programming and definitions in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item functional programming design principles apply
|
||||||
|
\item \hol{EVAL} is a great way to test quickly whether your definitions work as intended
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Functional Programming Example}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{Datatype `mylist = E | L 'a mylist`}
|
||||||
|
val it = (): unit
|
||||||
|
|
||||||
|
> \hol{Define `(mylen E = 0) \holAnd{} (mylen (L x xs) = SUC (mylen xs))`}
|
||||||
|
Definition has been stored under "mylen\_def"
|
||||||
|
val it =
|
||||||
|
|- (mylen E = 0) \holAnd{} !x xs. mylen (L x xs) = SUC (mylen xs):
|
||||||
|
thm
|
||||||
|
|
||||||
|
> \hol{EVAL ``mylen (L 2 (L 3 (L 1 E)))``}
|
||||||
|
val it =
|
||||||
|
|- mylen (L 2 (L 3 (L 1 E))) = 3:
|
||||||
|
thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Datatype Definitions}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the \hol{Datatype} package allows defining SML-style datatypes easily
|
||||||
|
\item there is support for
|
||||||
|
\begin{itemize}
|
||||||
|
\item algebraic datatypes
|
||||||
|
\item record types
|
||||||
|
\item mutually recursive types
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\item many constants are automatically introduced
|
||||||
|
\begin{itemize}
|
||||||
|
\item constructors
|
||||||
|
\item case-split constant
|
||||||
|
\item size function
|
||||||
|
\item field-update and accessor functions for records
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\item many theorems are derived and stored in current theory
|
||||||
|
\begin{itemize}
|
||||||
|
\item injectivity and distinctness of constructors
|
||||||
|
\item nchotomy and structural induction theorems
|
||||||
|
\item rewrites for case-split, size and record update functions
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example I}
|
||||||
|
\begin{block}{Tree Datatype in SML}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
datatype ('a,'b) btree = Leaf of 'a
|
||||||
|
| Node of ('a,'b) btree * 'b * ('a,'b) btree
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Tree Datatype in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Datatype `btree = Leaf 'a
|
||||||
|
| Node btree 'b btree`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Tree Datatype in HOL --- Deprecated Syntax}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Hol_datatype `btree = Leaf of 'a
|
||||||
|
| Node of btree => 'b => btree`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example I - Derived Theorems 1}
|
||||||
|
\begin{block}{\texttt{btree\_distinct}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !a2 a1 a0 a. Leaf a <> Node a0 a1 a2
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_11}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (!a a'. (Leaf a = Leaf a') <=> (a = a')) \holAnd{}
|
||||||
|
(!a0 a1 a2 a0' a1' a2'.
|
||||||
|
(Node a0 a1 a2 = Node a0' a1' a2') <=>
|
||||||
|
(a0 = a0') \holAnd{} (a1 = a1') \holAnd{} (a2 = a2'))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_nchotomy}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !bb. (?a. bb = Leaf a) \holOr{} (?b b1 b0. bb = Node b b1 b0)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_induction}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !P. (!a. P (Leaf a)) \holAnd{}
|
||||||
|
(!b b0. P b \holAnd{} P b0 ==> !b1. P (Node b b1 b0)) ==>
|
||||||
|
!b. P b
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example I - Derived Theorems 2}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_size\_def}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (!f f1 a. btree_size f f1 (Leaf a) = 1 + f a) \holAnd{}
|
||||||
|
(!f f1 a0 a1 a2.
|
||||||
|
btree_size f f1 (Node a0 a1 a2) =
|
||||||
|
1 + (btree_size f f1 a0 + (f1 a1 + btree_size f f1 a2)))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_case\_def}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (!a f f1. btree_CASE (Leaf a) f f1 = f a) \holAnd{}
|
||||||
|
(!a0 a1 a2 f f1. btree_CASE (Node a0 a1 a2) f f1 = f1 a0 a1 a2)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{btree\_case\_cong}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !M M' f f1.
|
||||||
|
(M = M') \holAnd{} (!a. (M' = Leaf a) ==> (f a = f' a)) \holAnd{}
|
||||||
|
(!a0 a1 a2.
|
||||||
|
(M' = Node a0 a1 a2) ==> (f1 a0 a1 a2 = f1' a0 a1 a2)) ==>
|
||||||
|
(btree_CASE M f f1 = btree_CASE M' f' f1')
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example II}
|
||||||
|
\begin{block}{Enumeration type in SML}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
datatype my_enum = E1 | E2 | E3
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Enumeration type in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Datatype `my_enum = E1 | E2 | E3`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example II - Derived Theorems}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{my\_enum\_nchotomy}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !P. P E1 \holAnd{} P E2 \holAnd{} P E3 ==> !a. P a
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{my\_enum\_distinct}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- E1 <> E2 \holAnd{} E1 <> E3 \holAnd{} E2 <> E3
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{my\_enum2num\_thm}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (my_enum2num E1 = 0) \holAnd{} (my_enum2num E2 = 1) \holAnd{} (my_enum2num E3 = 2)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{my\_enum2num\_num2my\_enum}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !r. r < 3 <=> (my_enum2num (num2my_enum r) = r)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example III}
|
||||||
|
\begin{block}{Record type in SML}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
type rgb = \{ r : int, g : int, b : int \}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Record type in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Datatype `rgb = <| r : num; g : num; b : num |>`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example III - Derived Theorems}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{rgb\_component\_equality}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !r1 r2. (r1 = r2) <=>
|
||||||
|
(r1.r = r2.r) \holAnd{} (r1.g = r2.g) \holAnd{} (r1.b = r2.b)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{rgb\_nchotomy}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !rr. ?n n0 n1. rr = rgb n n0 n1
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{rgb\_r\_fupd}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !f n n0 n1. rgb n n0 n1 with r updated_by f = rgb (f n) n0 n1
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{\texttt{rgb\_updates\_eq\_literal}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- !r n1 n0 n.
|
||||||
|
r with <|r := n1; g := n0; b := n|> = <|r := n1; g := n0; b := n|>
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Example IV}
|
||||||
|
\begin{itemize}
|
||||||
|
\item nested record types are not allowed
|
||||||
|
\item however, mutually recursive types can mitigate this restriction
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{Filesystem Datatype in SML}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
datatype file = Text of string
|
||||||
|
| Dir of \{owner : string ,
|
||||||
|
files : (string * file) list\}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{alertblock}{\textbf{Not Supported} Nested Record Type Example in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Datatype `file = Text string
|
||||||
|
| Dir <| owner : string ;
|
||||||
|
files : (string # file) list |>`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{block}{Filesystem Datatype - Mutual Recursion in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
Datatype `file = Text string
|
||||||
|
| Dir directory
|
||||||
|
;
|
||||||
|
directory = <| owner : string ;
|
||||||
|
files : (string # file) list |>`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - No support for Co-Algebraic Types}
|
||||||
|
\begin{itemize}
|
||||||
|
\item there is no support for co-algebraic types
|
||||||
|
\item the \texttt{Datatype} package could be extended to do so
|
||||||
|
\item other systems like Isabelle/HOL provide high-level methods for defining such types
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Co-algebraic Type Example in SML --- Lazy Lists}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
datatype 'a lazylist = Nil
|
||||||
|
| Cons of ('a * (unit -> 'a lazylist))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Datatype Package - Discussion}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the \texttt{Datatype} package allows defining many useful datatypes
|
||||||
|
\item however, there are many limitations
|
||||||
|
\begin{itemize}
|
||||||
|
\item some types cannot be defined in HOL, \eg empty types
|
||||||
|
\item some types are not supported, \eg co-algebraic types
|
||||||
|
\item there are bugs (currently \eg some trouble with certain mutually recursive definitions)
|
||||||
|
\end{itemize}
|
||||||
|
\item biggest restrictions in practice (in my opinion and my line of work)
|
||||||
|
\begin{itemize}
|
||||||
|
\item no support for co-algebraic datatypes
|
||||||
|
\item no nested record datatypes
|
||||||
|
\end{itemize}
|
||||||
|
\item depending on datatype, different sets of useful lemmata are derived
|
||||||
|
\item most important ones are added to \hol{TypeBase}
|
||||||
|
\begin{itemize}
|
||||||
|
\item tools like \hol{Induct\_on} and \hol{Cases\_on} use them (see the sketch below)
|
||||||
|
\item there is support for pattern matching
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
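\begin{exampleblock}{Sketch: \texttt{Cases\_on} using \texttt{TypeBase} information}\scriptsize
A minimal, indicative interaction for the \texttt{my\_enum} type from Example II;
the goal and the proof script are only a sketch:
\begin{semiverbatim}\scriptsize
> \hol{prove (``!e. (e = E1) \holOr{} (e = E2) \holOr{} (e = E3)``,
         GEN_TAC >> Cases_on `e` >> REWRITE_TAC [])}
val it = |- !e. (e = E1) \holOr{} (e = E2) \holOr{} (e = E3): thm
\end{semiverbatim}
\end{exampleblock}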
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Recursive Function Definitions}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Total Functional Language (\texttt{TFL}) package}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the \ml{TFL} package implements support for terminating function definitions
|
||||||
|
\item \hol{Define} defines functions from high-level descriptions
|
||||||
|
\item there is support for pattern matching
|
||||||
|
\item look and feel is like function definitions in SML
|
||||||
|
\item based on \emph{well-founded recursion} principle
|
||||||
|
\item \hol{Define} is the most common way to make definitions in HOL
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Well-Founded Relations}
|
||||||
|
\begin{itemize}
|
||||||
|
\item a relation \texttt{R : 'a -> 'a -> bool} is called \emph{well-founded}, iff
|
||||||
|
there are no infinite descending chains\\[.5em]
|
||||||
|
\hol{wellfounded R = \holNeg{}?f.\ !n.\ R (f (SUC n)) (f n)}
|
||||||
|
\bigskip
|
||||||
|
\item Example: \texttt{\$< :\ num -> num -> bool} is well-founded
|
||||||
|
\item if arguments of recursive calls are smaller according to well-founded relation,
|
||||||
|
the recursion terminates
|
||||||
|
\item this is the essence of termination proofs
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Well-Founded Recursion}
|
||||||
|
\begin{itemize}
|
||||||
|
\item a well-founded relation \hol{R} can be used to define recursive functions
|
||||||
|
\item this recursion principle is called \hol{WFREC} in HOL
|
||||||
|
\item idea of \hol{WFREC}
|
||||||
|
\begin{itemize}
|
||||||
|
\item if arguments get smaller according to \hol{R}, perform recursive call
|
||||||
|
\item otherwise abort and return \hol{ARB}
|
||||||
|
\end{itemize}
|
||||||
|
\item \hol{WFREC} always defines a function
|
||||||
|
\item if all recursive calls indeed decrease according to \hol{R}, the original recursive
|
||||||
|
equations can be derived from the \hol{WFREC} representation
|
||||||
|
\item TFL uses this internally
|
||||||
|
\item however, this is well-hidden from the user
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{\texttt{Define} - Initial Examples}
|
||||||
|
|
||||||
|
\begin{block}{Simple Definitions}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val DOUBLE_def = Define `DOUBLE n = n + n`}
|
||||||
|
val DOUBLE_def =
|
||||||
|
|- !n. DOUBLE n = n + n:
|
||||||
|
thm
|
||||||
|
|
||||||
|
> \hol{val MY_LENGTH_def = Define `(MY_LENGTH [] = 0) \holAnd{}
|
||||||
|
(MY_LENGTH (x::xs) = SUC (MY_LENGTH xs))`}
|
||||||
|
val MY_LENGTH_def =
|
||||||
|
|- (MY_LENGTH [] = 0) \holAnd{} !x xs. MY_LENGTH (x::xs) = SUC (MY_LENGTH xs):
|
||||||
|
thm
|
||||||
|
|
||||||
|
> \hol{val MY_APPEND_def = Define `(MY_APPEND [] ys = ys) \holAnd{}
|
||||||
|
(MY_APPEND (x::xs) ys = x :: (MY_APPEND xs ys))`}
|
||||||
|
val MY_APPEND_def =
|
||||||
|
|- (!ys. MY_APPEND [] ys = ys) \holAnd{}
|
||||||
|
(!x xs ys. MY_APPEND (x::xs) ys = x::MY_APPEND xs ys):
|
||||||
|
thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{\texttt{Define} discussion}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{Define} feels like a function definition in HOL
|
||||||
|
\item it can be used to define ``terminating'' recursive functions
|
||||||
|
\item \hol{Define} is implemented by a large, non-trivial piece of SML code
|
||||||
|
\item it uses many heuristics
|
||||||
|
\item outcome of \hol{Define} sometimes hard to predict
|
||||||
|
\item the input descriptions are only hints
|
||||||
|
\begin{itemize}
|
||||||
|
\item the produced function and the definitional theorem might be different
|
||||||
|
\item in simple examples, quantifiers are added
|
||||||
|
\item pattern compilation takes place
|
||||||
|
\item earlier ``conjuncts'' have precedence
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{\texttt{Define} - More Examples}
|
||||||
|
|
||||||
|
\begin{block}{}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val MY_HD_def = Define `MY_HD (x :: xs) = x`}
|
||||||
|
val MY_HD_def = |- !x xs. MY_HD (x::xs) = x : thm
|
||||||
|
|
||||||
|
> \hol{val IS_SORTED_def = Define `
|
||||||
|
(IS_SORTED (x1 :: x2 :: xs) = ((x1 < x2) \holAnd{} (IS_SORTED (x2::xs)))) \holAnd{}
|
||||||
|
(IS_SORTED _ = T)`}
|
||||||
|
val IS_SORTED_def =
|
||||||
|
|- (!xs x2 x1. IS_SORTED (x1::x2::xs) <=> x1 < x2 \holAnd{} IS_SORTED (x2::xs)) \holAnd{}
|
||||||
|
(IS_SORTED [] <=> T) \holAnd{} (!v. IS_SORTED [v] <=> T)
|
||||||
|
|
||||||
|
> \hol{val EVEN_def = Define `(EVEN 0 = T) \holAnd{} (ODD 0 = F) \holAnd{}
|
||||||
|
(EVEN (SUC n) = ODD n) \holAnd{} (ODD (SUC n) = EVEN n)`}
|
||||||
|
val EVEN_def =
|
||||||
|
|- (EVEN 0 <=> T) \holAnd{} (ODD 0 <=> F) \holAnd{} (!n. EVEN (SUC n) <=> ODD n) \holAnd{}
|
||||||
|
(!n. ODD (SUC n) <=> EVEN n) : thm
|
||||||
|
|
||||||
|
> \hol{val ZIP_def = Define `(ZIP (x::xs) (y::ys) = (x,y)::(ZIP xs ys)) \holAnd{}
|
||||||
|
(ZIP \_ \_ = [])`}
|
||||||
|
val ZIP_def =
|
||||||
|
|- (!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys) \holAnd{}
|
||||||
|
(!v1. ZIP [] v1 = []) \holAnd{} (!v4 v3. ZIP (v3::v4) [] = []) : thm
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Primitive Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{Define} introduces (if needed) the function using \hol{WFREC}
|
||||||
|
\item intended definition derived as a theorem
|
||||||
|
\item the theorems are stored in current theory
|
||||||
|
\item usually, one never needs to look at them
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Examples}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val IS_SORTED_primitive_def =
|
||||||
|
|- IS_SORTED =
|
||||||
|
WFREC (@R.\ WF R \holAnd{} !x1 xs x2. R (x2::xs) (x1::x2::xs))
|
||||||
|
(\textbsl{}IS_SORTED a.
|
||||||
|
case a of
|
||||||
|
[] => I T
|
||||||
|
| [x1] => I T
|
||||||
|
| x1::x2::xs => I (x1 < x2 \holAnd{} IS_SORTED (x2::xs)))
|
||||||
|
|
||||||
|
|- !R M. WF R ==> !x. WFREC R M x = M (RESTRICT (WFREC R M) R x) x
|
||||||
|
|- !f R x. RESTRICT f R x = (\textbsl{}y. if R y x then f y else ARB)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Induction Theorems}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{Define} automatically defines induction theorems
|
||||||
|
\item these theorems are stored in current theory with suffix \hol{ind}
|
||||||
|
\item use \hol{DB.fetch "-" "something\_ind"} to retrieve them
|
||||||
|
\item these induction theorems are useful for reasoning about the corresponding recursive functions (see the sketch below)
|
||||||
|
\end{itemize}
|
||||||
|
\begin{block}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val IS_SORTED_ind = |- !P.
|
||||||
|
((!x1 x2 xs. P (x2::xs) ==> P (x1::x2::xs)) \holAnd{}
|
||||||
|
P [] \holAnd{}
|
||||||
|
(!v. P [v])) ==>
|
||||||
|
!v. P v
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
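\begin{exampleblock}{Sketch: applying such an induction theorem}\scriptsize
An indicative usage pattern; \hol{recInduct} is one common way to apply a fetched
induction theorem to the current goal:
\begin{semiverbatim}\scriptsize
> \hol{val IS_SORTED_ind = DB.fetch "-" "IS_SORTED_ind"}

(* for a goal of the form  !l. ... IS_SORTED l ...  one can then start with *)
> \hol{e (recInduct IS_SORTED_ind)}
\end{semiverbatim}
\end{exampleblock}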
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{\texttt{Define} failing}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{Define} might fail to define a function for various reasons
|
||||||
|
\begin{itemize}
|
||||||
|
\item such a function cannot be defined in HOL
|
||||||
|
\item such a function can be defined, but not via the methods used by TFL
|
||||||
|
\item TFL can define such a function, but its heuristics are too weak and user guidance is required
|
||||||
|
\item there is a bug :-)
|
||||||
|
\end{itemize}
|
||||||
|
\item \emph{termination} is an important concept for \hol{Define}
|
||||||
|
\item it is easy to misunderstand termination in the context of HOL
|
||||||
|
\item we need to understand what is meant by termination
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Termination in HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item in SML it is natural to talk about termination of functions
|
||||||
|
\item in the HOL logic there is no concept of execution
|
||||||
|
\item thus, there is no concept of termination in HOL
|
||||||
|
\begin{minipage}{.8\textwidth}
|
||||||
|
\begin{exampleblock}{3 characterisations of a function \texttt{f :\ num -> num}}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{|- !n.\ f n = 0}
|
||||||
|
\item \hol{|- (f 0 = 0) \holAnd{} !n.\ (f (SUC n) = f n)}
|
||||||
|
\item \hol{|- (f 0 = 0) \holAnd{} !n.\ (f n = f (SUC n))}
|
||||||
|
\end{itemize}
|
||||||
|
Is \hol{f} terminating? All 3 theorems are equivalent.
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{minipage}\bigskip
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Termination in HOL II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is useful to think in terms of termination
|
||||||
|
\item the TFL package implements heuristics to define functions that would terminate in SML
|
||||||
|
\item the TFL package uses well-founded recursion
|
||||||
|
\item the required well-founded relation corresponds to a termination proof
|
||||||
|
\item therefore, it is very natural to think of \hol{Define} as searching for a termination proof
|
||||||
|
\item important: this is the idea behind this function definition package, not a property of HOL
|
||||||
|
\end{itemize}
|
||||||
|
\bottomstatement{\alert{HOL is not limited to ``terminating'' functions}}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Termination in HOL III}
|
||||||
|
\begin{itemize}
|
||||||
|
\item one can define ``non-terminating'' functions in HOL
|
||||||
|
\item however, one cannot do so (easily) with \hol{Define}
|
||||||
|
\end{itemize}
|
||||||
|
\begin{exampleblock}{Definition of \texttt{WHILE} in HOL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
\emph{|- !P g x. WHILE P g x = if P x then WHILE P g (g x) else x}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{exampleblock}{Execution Order}\scriptsize
|
||||||
|
There is no ``execution order''. One can easily define a complicated constant function:
|
||||||
|
\vspace{-.7em}
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\emph{(myk : num -> num) (n:num) = (let x = myk (n+1) in 0)}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\begin{alertblock}{Unsound Definitions}\scriptsize
|
||||||
|
A function \hol{f : num -> num} with the following property cannot be defined in HOL unless HOL is inconsistent:
|
||||||
|
\begin{semiverbatim}
|
||||||
|
\hol{!n. f n = ((f n) + 1)}
|
||||||
|
\end{semiverbatim}
|
||||||
|
Such a function would allow proving \hol{0 = 1}.
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Manual Termination Proofs I}
|
||||||
|
\begin{itemize}
|
||||||
|
\item TFL uses various heuristics to find a well-founded relation
|
||||||
|
\item however, these heuristics may not be strong enough
|
||||||
|
\item in such cases the user can provide a well-founded relation manually
|
||||||
|
\item the most common well-founded relations are \emph{measures}
|
||||||
|
\item measures map values to natural numbers and use the less relation\\
|
||||||
|
\hol{|- !(\alert{f:'a -> num}) x y.\ measure f x y <=> (f x < f y)}
|
||||||
|
\item all measures are well-founded: \hol{|- !f.\ WF (measure f)}
|
||||||
|
\item moreover, existing well-founded relations can be combined (see the sketch below)
|
||||||
|
\begin{itemize}
|
||||||
|
\item lexicographic order \hol{LEX}
|
||||||
|
\item list lexicographic order \hol{LLEX}
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
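\begin{exampleblock}{Sketch: typical relations given to \texttt{WF\_REL\_TAC}}\scriptsize
Indicative shapes only; the first is a plain measure, the second assumes the infix
lexicographic combinator \hol{LEX} for a function whose tupled argument is a pair of numbers:
\begin{semiverbatim}\scriptsize
(* a measure on the list component of the tupled argument *)
WF_REL_TAC `measure (\textbsl{}(\_, l).\ LENGTH l)`

(* a lexicographic combination of two well-founded orders *)
WF_REL_TAC `\$< LEX \$<`
\end{semiverbatim}
\end{exampleblock}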
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Manual Termination Proofs II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item if \hol{Define} fails to find a termination proof, \hol{Hol\_defn} can be used
|
||||||
|
\item \hol{Hol\_defn} defers termination proofs
|
||||||
|
\item it derives termination conditions and
|
||||||
|
sets up the function definitions
|
||||||
|
\item all results are packaged as a value of type \hol{defn}
|
||||||
|
\item after calling \hol{Hol\_defn} the defined function(s) can be used
|
||||||
|
\item however, the intended definition theorem has not been derived yet
|
||||||
|
\item to derive it, one needs to
|
||||||
|
\begin{itemize}
|
||||||
|
\item provide a well-founded relation
|
||||||
|
\item show that termination conditions respect that relation
|
||||||
|
\end{itemize}
|
||||||
|
\item \hol{Defn.tprove} and \hol{Defn.tgoal} are intended for this
|
||||||
|
\item proofs usually start by providing the relation via the tactic \hol{WF\_REL\_TAC}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Manual Termination Proof Example 1}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val qsort_defn = Hol_defn "qsort" `
|
||||||
|
(qsort ord [] = []) \holAnd{}
|
||||||
|
(qsort ord (x::rst) =
|
||||||
|
(qsort ord (FILTER (\$\holNeg{} o ord x) rst)) ++
|
||||||
|
[x] ++
|
||||||
|
(qsort ord (FILTER (ord x) rst)))`}
|
||||||
|
|
||||||
|
val qsort_defn = HOL function definition (recursive)
|
||||||
|
|
||||||
|
Equation(s) :
|
||||||
|
[...] |- qsort ord [] = []
|
||||||
|
[...] |- qsort ord (x::rst) =
|
||||||
|
qsort ord (FILTER ($\holNeg{} o ord x) rst) ++ [x] ++
|
||||||
|
qsort ord (FILTER (ord x) rst)
|
||||||
|
|
||||||
|
Induction : ...
|
||||||
|
|
||||||
|
Termination conditions :
|
||||||
|
0. !rst x ord. R (ord,FILTER (ord x) rst) (ord,x::rst)
|
||||||
|
1. !rst x ord. R (ord,FILTER ($\holNeg{} o ord x) rst) (ord,x::rst)
|
||||||
|
2. WF R
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Manual Termination Proof Example 2}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{Defn.tgoal qsort_defn}
|
||||||
|
|
||||||
|
Initial goal:
|
||||||
|
|
||||||
|
?R.
|
||||||
|
WF R \holAnd{}
|
||||||
|
(!rst x ord. R (ord,FILTER (ord x) rst) (ord,x::rst)) \holAnd{}
|
||||||
|
(!rst x ord. R (ord,FILTER (\$\holNeg{} o ord x) rst) (ord,x::rst))
|
||||||
|
|
||||||
|
\pause
|
||||||
|
> \hol{e (WF_REL_TAC `measure (\textbsl{}(\_, l).\ LENGTH l)`)}
|
||||||
|
|
||||||
|
1 subgoal :
|
||||||
|
|
||||||
|
(!rst x ord. LENGTH (FILTER (ord x) rst) < LENGTH (x::rst)) \holAnd{}
|
||||||
|
(!rst x ord. LENGTH (FILTER (\textbsl{}x'.\ \holNeg{}ord x x') rst) < LENGTH (x::rst))
|
||||||
|
|
||||||
|
> \hol{...}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Manual Termination Proof Example 3}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (qsort_def, qsort_ind) =
|
||||||
|
Defn.tprove (qsort_defn,
|
||||||
|
WF_REL_TAC `measure (\textbsl{}(\_, l).\ LENGTH l)`) >> ...)}
|
||||||
|
|
||||||
|
val qsort_def =
|
||||||
|
|- (qsort ord [] = []) \holAnd{}
|
||||||
|
(qsort ord (x::rst) =
|
||||||
|
qsort ord (FILTER ($~ o ord x) rst) ++ [x] ++
|
||||||
|
qsort ord (FILTER (ord x) rst))
|
||||||
|
|
||||||
|
val qsort_ind =
|
||||||
|
|- !P. (!ord. P ord []) \holAnd{}
|
||||||
|
(!ord x rst.
|
||||||
|
P ord (FILTER (ord x) rst) \holAnd{}
|
||||||
|
P ord (FILTER ($~ o ord x) rst) ==>
|
||||||
|
P ord (x::rst)) ==>
|
||||||
|
!v v1. P v v1
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
511
lectures/11_good_definitions.tex
Normal file
511
lectures/11_good_definitions.tex
Normal file
@ -0,0 +1,511 @@
|
|||||||
|
\part{Good Definitions}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{General Discussion}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Good Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item using \textit{good} definitions is very important
|
||||||
|
\begin{itemize}
|
||||||
|
\item good definitions are vital for \emph{clarity}
|
||||||
|
\item \emph{proofs} depend a lot on the form of definitions
|
||||||
|
\end{itemize}
|
||||||
|
\item unfortunately, it is hard to state what a good definition is
|
||||||
|
\item even harder to come up with good definitions
|
||||||
|
\item let's look at it a bit more closely anyway
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Good Definitions --- Clarity I}
|
||||||
|
\begin{itemize}
|
||||||
|
\item HOL guarantees that theorems do indeed hold
|
||||||
|
\item However, does the theorem mean what you think it does?
|
||||||
|
\item you can separate your development in
|
||||||
|
\begin{itemize}
|
||||||
|
\item main theorems you care for
|
||||||
|
\item auxiliary stuff used to derive your main theorems
|
||||||
|
\end{itemize}
|
||||||
|
\item it is essential to understand your main theorems
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Good Definitions --- Clarity II}
|
||||||
|
\begin{minipage}[t]{.45\textwidth}
|
||||||
|
\begin{block}{Guarded by HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item proofs checked
|
||||||
|
\item internal, technical definitions
|
||||||
|
\item technical lemmata
|
||||||
|
\item proof tools
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}
|
||||||
|
\qquad
|
||||||
|
\begin{minipage}[t]{.45\textwidth}
|
||||||
|
\begin{block}{Manual review needed for}
|
||||||
|
\begin{itemize}
|
||||||
|
\item meaning of main theorems
|
||||||
|
\item meaning of definitions used by main theorems
|
||||||
|
\item meaning of types used by main theorems
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Good Definitions --- Clarity III}
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is essential to understand your main theorems
|
||||||
|
\begin{itemize}
|
||||||
|
\item you need to understand all the definitions directly used
|
||||||
|
\item you need to understand the indirectly used ones as well
|
||||||
|
\item you need to convince others that you express the intended statement
|
||||||
|
\item therefore, it is vital to \alert{use very simple, clear definitions}
|
||||||
|
\end{itemize}
|
||||||
|
\item defining concepts is often the main development task
|
||||||
|
\item checking resulting model against real artefact is vital
|
||||||
|
\begin{itemize}
|
||||||
|
\item testing via \eg \hol{EVAL}
|
||||||
|
\item formal sanity
|
||||||
|
\item conformance testing
|
||||||
|
\end{itemize}
|
||||||
|
\item wrong models are the main source of error when using HOL
|
||||||
|
\item proofs, auxiliary lemmata and auxiliary definitions
|
||||||
|
\begin{itemize}
|
||||||
|
\item can be as technical and complicated as you like
|
||||||
|
\item correctness is guaranteed by HOL
|
||||||
|
\item reviewers don't need to care
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Importance of Good Definitions --- Proofs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item good definitions can shorten proofs significantly
|
||||||
|
\item they improve maintainability
|
||||||
|
\item they can improve automation drastically
|
||||||
|
\item unfortunately, for proofs, definitions often need to be technical
|
||||||
|
\item this contradicts clarity aims
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{How to come up with good definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item unfortunately, it is hard to state what a good definition is
|
||||||
|
\item it is even harder to come up with them
|
||||||
|
\begin{itemize}
|
||||||
|
\item there are often many competing interests
|
||||||
|
\item a lot of experience and detailed tool knowledge is needed
|
||||||
|
\item much depends on personal style and taste
|
||||||
|
\end{itemize}
|
||||||
|
\item general advice: use more than one definition
|
||||||
|
\begin{itemize}
|
||||||
|
\item in HOL you can derive equivalent definitions as theorems
|
||||||
|
\item define a concept as clearly and easily as possible
|
||||||
|
\item derive equivalent definitions for various purposes
|
||||||
|
\begin{itemize}
|
||||||
|
\item one very close to your favourite textbook
|
||||||
|
\item one nice for certain types of proofs
|
||||||
|
\item another one good for evaluation
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\item lessons from functional programming apply
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Functional Programming}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Good Definitions in Functional Programming}
|
||||||
|
|
||||||
|
\begin{block}{Objectives}
|
||||||
|
\begin{itemize}
|
||||||
|
\item clarity (readability, maintainability)
|
||||||
|
\item performance (runtime speed, memory usage, ...)
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{General Advice}
|
||||||
|
\begin{itemize}
|
||||||
|
\item use the powerful type-system
|
||||||
|
\item use many small function definitions
|
||||||
|
\item encode invariants in types and function signatures
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions -- no number encodings}
|
||||||
|
\begin{itemize}
|
||||||
|
\item many programmers familiar with C encode everything as a number
|
||||||
|
\item enumeration types are very cheap in SML and HOL
|
||||||
|
\item use them instead
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example Enumeration Types}\scriptsize
|
||||||
|
In C the result of an order comparison is an integer with 3 equivalence classes:
|
||||||
|
0, negative and positive integers. In SML and HOL, it is better to use a variant type.
|
||||||
|
\begin{semiverbatim}
|
||||||
|
val _ = Datatype `ordering = LESS | EQUAL | GREATER`;
|
||||||
|
|
||||||
|
val compare_def = Define `
|
||||||
|
(compare LESS lt eq gt = lt)
|
||||||
|
\holAnd{} (compare EQUAL lt eq gt = eq)
|
||||||
|
\holAnd{} (compare GREATER lt eq gt = gt) `;
|
||||||
|
|
||||||
|
val list_compare_def = Define `
|
||||||
|
(list_compare cmp [] [] = EQUAL) \holAnd{} (list_compare cmp [] l2 = LESS)
|
||||||
|
\holAnd{} (list_compare cmp l1 [] = GREATER)
|
||||||
|
\holAnd{} (list_compare cmp (x::l1) (y::l2) = compare (cmp (x:'a) y)
|
||||||
|
(* x<y *) LESS
|
||||||
|
(* x=y *) (list_compare cmp l1 l2)
|
||||||
|
(* x>y *) GREATER) `;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions --- Isomorphic Types}
|
||||||
|
\begin{itemize}
|
||||||
|
\item the type-checker is your friend
|
||||||
|
\begin{itemize}
|
||||||
|
\item it helps you find errors
|
||||||
|
\item code becomes more robust
|
||||||
|
\item using good types is a great way of writing self-documenting code
|
||||||
|
\end{itemize}
|
||||||
|
\item therefore, use many types
|
||||||
|
\item even use types isomorphic to existing ones
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Virtual and Physical Memory Addresses}\scriptsize
|
||||||
|
In a development, virtual and physical addresses might both be numbers. It is
|
||||||
|
still nice to use separate types to avoid mixing them up.
|
||||||
|
\begin{semiverbatim}
|
||||||
|
val _ = Datatype `vaddr = VAddr num`;
|
||||||
|
val _ = Datatype `paddr = PAddr num`;
|
||||||
|
|
||||||
|
val virt_to_phys_addr_def = Define `
|
||||||
|
virt_to_phys_addr (VAddr a) = PAddr( \textrm{\textit{translation of}} a )`;
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions --- Record Types I}
|
||||||
|
\begin{itemize}
|
||||||
|
\item often people use tuples where records would be more appropriate
|
||||||
|
\item using large tuples quickly becomes awkward
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is easy to mix up order of tuple entries
|
||||||
|
\begin{itemize}
|
||||||
|
\item often types coincide, so type-checker does not help
|
||||||
|
\end{itemize}
|
||||||
|
\item no good error messages for tuples
|
||||||
|
\begin{itemize}
|
||||||
|
\item hard to decipher type mismatch messages for long product types
|
||||||
|
\item hard to figure out which entry is missing at which position
|
||||||
|
\item non-local error messages
|
||||||
|
\item a variable in the last entry can hide missing entries
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\item records sometimes require slightly more proof effort
|
||||||
|
\item however, records have many benefits
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions --- Record Types II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item using records
|
||||||
|
\begin{itemize}
|
||||||
|
\item introduces field names
|
||||||
|
\item provides automatically defined accessor and update functions (see the sketch below)
|
||||||
|
\item leads to better type-checking error messages
|
||||||
|
\end{itemize}
|
||||||
|
\item records improve readability
|
||||||
|
\begin{itemize}
|
||||||
|
\item accessors and update functions lead to shorter code
|
||||||
|
\item field names act as documentation
|
||||||
|
\end{itemize}
|
||||||
|
\item records improve maintainability
|
||||||
|
\begin{itemize}
|
||||||
|
\item improved error messages
|
||||||
|
\item much easier to add extra fields
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
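\begin{exampleblock}{Sketch: records in practice}\scriptsize
A small, indicative example reusing the \texttt{rgb} record type from Example III
(the constants \texttt{white} and \texttt{dim} are made up):
\begin{semiverbatim}\scriptsize
val _ = Datatype `rgb = <| r : num; g : num; b : num |>`;

val white_def = Define `white = <| r := 255; g := 255; b := 255 |>`;
val dim_def   = Define `dim c = c with r := (c.r DIV 2)`;
\end{semiverbatim}
\end{exampleblock}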
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions --- Encoding Invariants}
|
||||||
|
\begin{itemize}
|
||||||
|
\item try to encode as many invariants as possible in the types
|
||||||
|
\item this allows the type-checker to ensure them for you
|
||||||
|
\item you don't have to check them manually any more
|
||||||
|
\item your code becomes more robust and clearer
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Network Connections (Example by Yaron Minsky from Jane Street)}\scriptsize
|
||||||
|
Consider the following datatype for network connections. It has many implicit invariants.
|
||||||
|
\begin{semiverbatim}
|
||||||
|
datatype connection_state = Connected | Disconnected | Connecting;
|
||||||
|
|
||||||
|
type connection_info = \{
|
||||||
|
state : connection_state,
|
||||||
|
server : inet_address,
|
||||||
|
last_ping_time : time option,
|
||||||
|
last_ping_id : int option,
|
||||||
|
session_id : string option,
|
||||||
|
when_initiated : time option,
|
||||||
|
when_disconnected : time option
|
||||||
|
\}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions --- Encoding Invariants II}
|
||||||
|
\begin{exampleblock}{Network Connections (Example by Yaron Minsky from Jane Street) II}\scriptsize
|
||||||
|
The following definition of \texttt{connection\_info} makes the invariants explicit:
|
||||||
|
\begin{semiverbatim}
|
||||||
|
type connected = \{ last_ping : (time * int) option,
|
||||||
|
session_id : string \};
|
||||||
|
type disconnected = \{ when_disconnected : time \};
|
||||||
|
type connecting = \{ when_initiated : time \};
|
||||||
|
|
||||||
|
datatype connection_state =
|
||||||
|
Connected of connected
|
||||||
|
| Disconnected of disconnected
|
||||||
|
| Connecting of connecting;
|
||||||
|
|
||||||
|
type connection_info = \{
|
||||||
|
state : connection_state,
|
||||||
|
server : inet_address
|
||||||
|
\}
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
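\begin{exampleblock}{Possible HOL Counterpart (sketch)}\scriptsize
A rough HOL version of the refined SML datatype above; since no \texttt{time} type is
assumed here, \texttt{num} is used as a stand-in:
\begin{semiverbatim}\scriptsize
val _ = Datatype `connection_state =
    Connected ((num # num) option) string   (* last_ping, session_id *)
  | Disconnected num                        (* when_disconnected *)
  | Connecting num                          (* when_initiated *)`;
\end{semiverbatim}
\end{exampleblock}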
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Good Definitions in HOL}
|
||||||
|
|
||||||
|
\begin{block}{Objectives}
|
||||||
|
\begin{itemize}
|
||||||
|
\item clarity (readability)
|
||||||
|
\item good for proofs
|
||||||
|
\item performance (good for automation, easily evaluatable, ...)
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{General Advice}
|
||||||
|
\begin{itemize}
|
||||||
|
\item same advice as for functional programming applies
|
||||||
|
\item use even smaller definitions
|
||||||
|
\begin{itemize}
|
||||||
|
\item introduce auxiliary definitions for important function parts
|
||||||
|
\item use extra definitions for important constants
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\item tiny definitions
|
||||||
|
\begin{itemize}
|
||||||
|
\item allow keeping proof state small by unfolding only needed ones
|
||||||
|
\item allow many small lemmata
|
||||||
|
\item improve maintainability
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions in HOL II}
|
||||||
|
|
||||||
|
\begin{block}{Technical Issues}
|
||||||
|
\begin{itemize}
|
||||||
|
\item write definitions such that they work well with HOL's tools
|
||||||
|
\item this requires you to know HOL well
|
||||||
|
\item a lot of experience is required
|
||||||
|
\item general advice
|
||||||
|
\begin{itemize}
|
||||||
|
\item avoid explicit case-expressions
|
||||||
|
\item prefer curried functions
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val ZIP_GOOD_def = Define `(ZIP (x::xs) (y::ys) = (x,y)::(ZIP xs ys)) \holAnd{}
|
||||||
|
(ZIP _ _ = [])`
|
||||||
|
|
||||||
|
val ZIP_BAD1_def = Define `ZIP xs ys = case (xs, ys) of
|
||||||
|
(x::xs, y::ys) => (x,y)::(ZIP xs ys)
|
||||||
|
| (_, _) => []`
|
||||||
|
|
||||||
|
val ZIP_BAD2_def = Define `(ZIP (x::xs, y::ys) = (x,y)::(ZIP (xs, ys))) \holAnd{}
|
||||||
|
(ZIP _ = [])`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Good Definitions in HOL III}
|
||||||
|
|
||||||
|
\begin{block}{Multiple Equivalent Definitions}
|
||||||
|
\begin{itemize}
|
||||||
|
\item satisfy competing requirements by having multiple equivalent definitions
|
||||||
|
\item derive them as theorems
|
||||||
|
\item initial definition should be as clear as possible
|
||||||
|
\begin{itemize}
|
||||||
|
\item clarity allows simpler reviews
|
||||||
|
\item simplicity reduces the likelihood of errors
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Example - \texttt{ALL\_DISTINCT}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (ALL_DISTINCT [] <=> T) \holAnd{}
|
||||||
|
(!h t. ALL_DISTINCT (h::t) <=> \holNeg{}MEM h t \holAnd{} ALL_DISTINCT t)
|
||||||
|
|
||||||
|
|- !l. ALL_DISTINCT l <=>
|
||||||
|
(!x. MEM x l ==> (FILTER (\$= x) l = [x]))
|
||||||
|
|
||||||
|
|- !ls. ALL_DISTINCT ls <=> (CARD (set ls) = LENGTH ls):
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Formal Sanity}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formal Sanity}
|
||||||
|
|
||||||
|
\begin{block}{Formal Sanity}
|
||||||
|
\begin{itemize}
|
||||||
|
\item to ensure correctness, test your definitions, \eg via \hol{EVAL} (see the sketch below)
|
||||||
|
\item in HOL testing means symbolic evaluation, \ie proving lemmata
|
||||||
|
\item \emph{formally proving sanity check lemmata} is very beneficial
|
||||||
|
\begin{itemize}
|
||||||
|
\item they should express core properties of your definition
|
||||||
|
\item thereby they check your intuition against your actual definitions
|
||||||
|
\item these lemmata are often useful for following proofs
|
||||||
|
\item using them improves robustness and maintainability of your development
|
||||||
|
\end{itemize}
|
||||||
|
\item I highly recommend using formal sanity checks
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
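\begin{exampleblock}{Sketch: quick tests via \texttt{EVAL}}\scriptsize
An indicative interaction for the \texttt{ALL\_DISTINCT} definition of the next slide
(the results shown are the expected ones):
\begin{semiverbatim}\scriptsize
> \hol{EVAL ``ALL_DISTINCT [1;2;3:num]``}
val it = |- ALL_DISTINCT [1; 2; 3] <=> T: thm

> \hol{EVAL ``ALL_DISTINCT [1;2;1:num]``}
val it = |- ALL_DISTINCT [1; 2; 1] <=> F: thm
\end{semiverbatim}
\end{exampleblock}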
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formal Sanity Example I}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ALL_DISTINCT = Define `
|
||||||
|
(ALL_DISTINCT [] = T) \holAnd{}
|
||||||
|
(ALL_DISTINCT (h::t) = \holNeg{}MEM h t \holAnd{} ALL_DISTINCT t)`};
|
||||||
|
\end{semiverbatim}
|
||||||
|
|
||||||
|
\begin{block}{Example Sanity Check Lemmata}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- ALL_DISTINCT []\smallskip
|
||||||
|
|- !x xs. ALL_DISTINCT (x::xs) <=> \holNeg{}MEM x xs \holAnd{} ALL_DISTINCT xs\smallskip
|
||||||
|
|- !x. ALL_DISTINCT [x]\smallskip
|
||||||
|
|- !x xs. \holNeg{}(ALL_DISTINCT (x::x::xs))\smallskip
|
||||||
|
|- !l. ALL_DISTINCT (REVERSE l) <=> ALL_DISTINCT l\smallskip
|
||||||
|
|- !x l. ALL_DISTINCT (SNOC x l) <=> \holNeg{}MEM x l \holAnd{} ALL_DISTINCT l\smallskip
|
||||||
|
|- !l1 l2. ALL_DISTINCT (l1 ++ l2) <=>
|
||||||
|
ALL_DISTINCT l1 \holAnd{} ALL_DISTINCT l2 \holAnd{} !e. MEM e l1 ==> \holNeg{}MEM e l2
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formal Sanity Example II 1}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `
|
||||||
|
(ZIP [] ys = []) \holAnd{} (ZIP xs [] = []) \holAnd{}
|
||||||
|
(ZIP (x::xs) (y::ys) = (x, y)::(ZIP xs ys))`}
|
||||||
|
|
||||||
|
val ZIP_def =
|
||||||
|
|- (!ys. ZIP [] ys = []) \holAnd{} (!v3 v2. ZIP (v2::v3) [] = []) \holAnd{}
|
||||||
|
(!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys)
|
||||||
|
\end{semiverbatim}\vspace{-1em}
|
||||||
|
\begin{itemize}
|
||||||
|
\item above definition of \hol{ZIP} looks straightforward
|
||||||
|
\item small changes cause heuristics to produce different theorems
|
||||||
|
\item use formal sanity lemmata to compensate
|
||||||
|
\end{itemize}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `
|
||||||
|
(ZIP xs [] = []) \holAnd{} (ZIP [] ys = []) \holAnd{}
|
||||||
|
(ZIP (x::xs) (y::ys) = (x, y)::(ZIP xs ys))`}
|
||||||
|
|
||||||
|
val ZIP_def =
|
||||||
|
|- (!xs. ZIP xs [] = []) \holAnd{} (!v3 v2. ZIP [] (v2::v3) = []) \holAnd{}
|
||||||
|
(!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formal Sanity Example II 2}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val ZIP_def =
|
||||||
|
|- (!ys. ZIP [] ys = []) \holAnd{} (!v3 v2. ZIP (v2::v3) [] = []) \holAnd{}
|
||||||
|
(!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\begin{block}{Example Formal Sanity Lemmata}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
|- (!xs. ZIP xs [] = []) \holAnd{} (!ys. ZIP [] ys = []) \holAnd{}
|
||||||
|
(!y ys x xs. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys)\smallskip
|
||||||
|
|- !xs ys. LENGTH (ZIP xs ys) = MIN (LENGTH xs) (LENGTH ys)\smallskip
|
||||||
|
|- !x y xs ys. MEM (x, y) (ZIP xs ys) ==> (MEM x xs \holAnd{} MEM y ys)\smallskip
|
||||||
|
|- !xs1 xs2 ys1 ys2. LENGTH xs1 = LENGTH ys1 ==>
|
||||||
|
(ZIP (xs1++xs2) (ys1++ys2) = (ZIP xs1 ys1 ++ ZIP xs2 ys2))\smallskip
|
||||||
|
...
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\begin{itemize}
|
||||||
|
\item in your proofs, use the sanity lemmata, not the original definition
|
||||||
|
\item this makes your development robust against
|
||||||
|
\begin{itemize}
|
||||||
|
\item small changes to the definition required later
|
||||||
|
\item changes to \hol{Define} and its heuristics
|
||||||
|
\item bugs in function definition package
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
195
lectures/12_deep_shallow.tex
Normal file
195
lectures/12_deep_shallow.tex
Normal file
@ -0,0 +1,195 @@
|
|||||||
|
\part{Deep and Shallow Embeddings}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Deep and Shallow Embeddings}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item often one models some kind of formal language
|
||||||
|
\item important design decision: use \emph{deep} or \emph{shallow} embedding
|
||||||
|
\item in a nutshell:
|
||||||
|
\begin{itemize}
|
||||||
|
\item shallow embeddings just model semantics
|
||||||
|
\item deep embeddings model syntax as well
|
||||||
|
\end{itemize}
|
||||||
|
\item a shallow embedding directly uses the HOL logic
|
||||||
|
\item a deep embedding
|
||||||
|
\begin{itemize}
|
||||||
|
\item defines a datatype for the syntax of the language
|
||||||
|
\item provides a function to map this syntax to a semantics
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Embedding of Propositional Logic I}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item propositional logic is a subset of HOL
|
||||||
|
\item a shallow embedding is therefore trivial
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val sh_true_def = Define `sh_true = T`;
|
||||||
|
val sh_var_def = Define `sh_var (v:bool) = v`;
|
||||||
|
val sh_not_def = Define `sh_not b = \holNeg{}b`;
|
||||||
|
val sh_and_def = Define `sh_and b1 b2 = (b1 \holAnd{} b2)`;
|
||||||
|
val sh_or_def = Define `sh_or b1 b2 = (b1 \holOr{} b2)`;
|
||||||
|
val sh_implies_def = Define `sh_implies b1 b2 = (b1 ==> b2)`;
|
||||||
|
\end{semiverbatim}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Embedding of Propositional Logic II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item we can also define a datatype for propositional logic
|
||||||
|
\item this leads to a deep embedding
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = Datatype `bvar = BVar num`
|
||||||
|
val _ = Datatype `prop = d_true | d_var bvar | d_not prop
|
||||||
|
| d_and prop prop | d_or prop prop
|
||||||
|
| d_implies prop prop`;
|
||||||
|
|
||||||
|
val _ = Datatype `var_assignment = BAssign (bvar -> bool)`
|
||||||
|
val VAR_VALUE_def = Define `VAR_VALUE (BAssign a) v = (a v)`
|
||||||
|
|
||||||
|
val PROP_SEM_def = Define `
|
||||||
|
(PROP_SEM a d_true = T) \holAnd{}
|
||||||
|
(PROP_SEM a (d_var v) = VAR_VALUE a v) \holAnd{}
|
||||||
|
(PROP_SEM a (d_not p) = \holNeg{}(PROP_SEM a p)) \holAnd{}
|
||||||
|
(PROP_SEM a (d_and p1 p2) = (PROP_SEM a p1 \holAnd{} PROP_SEM a p2)) \holAnd{}
|
||||||
|
(PROP_SEM a (d_or p1 p2) = (PROP_SEM a p1 \holOr{} PROP_SEM a p2)) \holAnd{}
|
||||||
|
(PROP_SEM a (d_implies p1 p2) = (PROP_SEM a p1 ==> PROP_SEM a p2))`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Shallow vs.\ Deep Embeddings}
|
||||||
|
|
||||||
|
\newcommand{\dummyitem}{\item[] \leavevmode\phantom{gg}}
|
||||||
|
|
||||||
|
\begin{minipage}[t]{.46\textwidth}
|
||||||
|
\begin{block}{Shallow}
|
||||||
|
\begin{itemize}
|
||||||
|
\item quick and easy to build
|
||||||
|
\item extensions are simple
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}\qquad
|
||||||
|
\begin{minipage}[t]{.46\textwidth}
|
||||||
|
\begin{block}{Deep}
|
||||||
|
\begin{itemize}
|
||||||
|
\item can reason about syntax
|
||||||
|
\item allows verified implementations
|
||||||
|
\item sometimes tricky to define
|
||||||
|
\begin{itemize}
|
||||||
|
\item \eg bound variables
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{block}{Important Questions for Deciding}
|
||||||
|
\begin{itemize}
|
||||||
|
\item Do I need to reason about syntax?
|
||||||
|
\item Do I have hard to define syntax like bound variables?
|
||||||
|
\item How much time do I have?
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Embedding of Propositional Logic III}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item with deep embedding one can easily formalise syntactic properties like
|
||||||
|
\begin{itemize}
|
||||||
|
\item Which variables does a propositional formula contain?
|
||||||
|
\item Is a formula in negation-normal-form (NNF)?
|
||||||
|
\end{itemize}
|
||||||
|
\item with shallow embeddings
|
||||||
|
\begin{itemize}
|
||||||
|
\item syntactic concepts can't be defined in HOL
|
||||||
|
\item however, they can be defined in SML
|
||||||
|
\item no proofs about them possible
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = Define `
|
||||||
|
(IS_NNF (d_not d_true) = T) \holAnd{} (IS_NNF (d_not (d_var v)) = T) \holAnd{}
|
||||||
|
(IS_NNF (d_not _) = F) \holAnd{}\medskip
|
||||||
|
(IS_NNF d_true = T) \holAnd{} (IS_NNF (d_var v) = T) \holAnd{}
|
||||||
|
(IS_NNF (d_and p1 p2) = (IS_NNF p1 \holAnd{} IS_NNF p2)) \holAnd{}
|
||||||
|
(IS_NNF (d_or p1 p2) = (IS_NNF p1 \holAnd{} IS_NNF p2)) \holAnd{}
|
||||||
|
(IS_NNF (d_implies p1 p2) = (IS_NNF p1 \holAnd{} IS_NNF p2))`
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Verified vs.\ Verifying Program}
|
||||||
|
\newcommand{\dummyitem}{\item[] \leavevmode\phantom{gg}}
|
||||||
|
|
||||||
|
\begin{minipage}[t]{.46\textwidth}
|
||||||
|
\begin{block}{Verified Programs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item are formalised in HOL
|
||||||
|
\item their properties have been proven once and for all
|
||||||
|
\item all runs have proven properties
|
||||||
|
\item are usually less sophisticated, since they need verification
|
||||||
|
\item is what one wants ideally
|
||||||
|
\item often require deep embedding
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}\qquad
|
||||||
|
\begin{minipage}[t]{.46\textwidth}
|
||||||
|
\begin{block}{Verifying Programs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item are written in meta-language
|
||||||
|
\item they produce a separate proof for each run
|
||||||
|
\item only the current run is certain to have the properties
|
||||||
|
\item allow more flexibility, \eg fancy heuristics
|
||||||
|
\item good pragmatic solution
|
||||||
|
\item shallow embedding fine
|
||||||
|
\end{itemize}
|
||||||
|
\end{block}
|
||||||
|
\end{minipage}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Summary Deep vs.\ Shallow Embeddings}
|
||||||
|
\begin{itemize}
|
||||||
|
\item deep embeddings require more work
|
||||||
|
\item they however allow reasoning about syntax
|
||||||
|
\begin{itemize}
|
||||||
|
\item induction and case-splits possible
|
||||||
|
\item a semantic subset can be carved out syntactically
|
||||||
|
\end{itemize}
|
||||||
|
\item syntax sometimes hard to define for deep embeddings
|
||||||
|
\item combinations of deep and shallow embeddings common
|
||||||
|
\begin{itemize}
|
||||||
|
\item certain parts are deeply embedded
|
||||||
|
\item others are embedded shallowly
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
1208
lectures/13_rewriting.tex
Normal file
1208
lectures/13_rewriting.tex
Normal file
File diff suppressed because it is too large
469
lectures/14_advanced_definitions.tex
Normal file
469
lectures/14_advanced_definitions.tex
Normal file
@ -0,0 +1,469 @@
|
|||||||
|
\part{Advanced Definition Principles}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{Inductive and Coinductive Relations}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Relations}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item a relation is a function from some arguments to \texttt{bool}
|
||||||
|
\item the following example types are all types of relations:
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{:\ 'a -> 'a -> bool}
|
||||||
|
\item \hol{:\ 'a -> 'b -> bool}
|
||||||
|
\item \hol{:\ 'a -> 'b -> 'c -> 'd -> bool}
|
||||||
|
\item \hol{:\ ('a \# 'b \# 'c) -> bool}
|
||||||
|
\item \hol{:\ bool}
|
||||||
|
\item \hol{:\ 'a -> bool}
|
||||||
|
\end{itemize}
|
||||||
|
\item relations are closely related to sets
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{R a b c <=> (a, b, c) IN \{(a, b, c) | R a b c\}}
|
||||||
|
\item \hol{(a, b, c) IN S <=> (\textbsl{}a b c.\ (a, b, c) IN S) a b c}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Relations II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item relations are often defined by a set of \emph{rules}
|
||||||
|
\begin{minipage}{.9\textwidth}
|
||||||
|
\begin{exampleblock}{Definition of Reflexive-Transitive Closure}
|
||||||
|
The transitive reflexive closure of a relation \hol{R : 'a -> 'a -> bool} can
|
||||||
|
be defined as the least relation \hol{RTC R} that satisfies the following rules:
|
||||||
|
\bigskip
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*{\hol{R x y}}{\hol{RTC R x y}}$\quad
|
||||||
|
$\inferrule*{\ }{\hol{RTC R x x}}$\quad
|
||||||
|
$\inferrule*{\hol{RTC R x y}\\\hol{RTC R y z}}{\hol{RTC R x z}}$
|
||||||
|
\end{center}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{minipage}
|
||||||
|
\item if the rules are monotone, a least and a greatest fixpoint exist (Knaster-Tarski theorem)
|
||||||
|
\item least fixpoints give rise to \emph{inductive relations}
|
||||||
|
\item greatest fixpoints give rise to \emph{coinductive relations}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{(Co)inductive Relations in HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{(Co)IndDefLib} provides infrastructure for defining (co)inductive relations
|
||||||
|
\item given a set of rules \hol{Hol\_(co)reln} defines (co)inductive relations
|
||||||
|
\item 3 theorems are returned and stored in current theory
|
||||||
|
\begin{itemize}
|
||||||
|
\item a rules theorem --- it states that the defined constant satisfies the rules
|
||||||
|
\item a cases theorem --- this is an equational form of the rules showing that the defined relation is indeed a fixpoint
|
||||||
|
\item a (co)induction theorem
|
||||||
|
\end{itemize}
|
||||||
|
\item additionally a strong (co)induction theorem is stored in current theory
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Transitive Reflexive Closure}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (RTC_REL_rules, RTC_REL_ind, RTC_REL_cases) = Hol_reln `
|
||||||
|
(!x y. R x y ==> RTC_REL R x y) \holAnd{}
|
||||||
|
(!x. RTC_REL R x x) \holAnd{}
|
||||||
|
(!x y z. RTC_REL R x y \holAnd{} RTC_REL R y z ==> RTC_REL R x z)`}
|
||||||
|
|
||||||
|
val RTC_REL_rules = |- !R.
|
||||||
|
(!x y. R x y ==> RTC_REL R x y) \holAnd{} (!x. RTC_REL R x x) \holAnd{}
|
||||||
|
(!x y z. RTC_REL R x y \holAnd{} RTC_REL R y z ==> RTC_REL R x z)
|
||||||
|
|
||||||
|
val RTC_REL_cases = |- !R a0 a1.
|
||||||
|
RTC_REL R a0 a1 <=>
|
||||||
|
(R a0 a1 \holOr{} (a1 = a0) \holOr{} ?y. RTC_REL R a0 y \holAnd{} RTC_REL R y a1)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Transitive Reflexive Closure II}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val RTC_REL_ind = |- !R RTC_REL'.
|
||||||
|
((!x y. R x y ==> RTC_REL' x y) \holAnd{} (!x. RTC_REL' x x) \holAnd{}
|
||||||
|
(!x y z. RTC_REL' x y \holAnd{} RTC_REL' y z ==> RTC_REL' x z)) ==>
|
||||||
|
(!a0 a1. RTC_REL R a0 a1 ==> RTC_REL' a0 a1)
|
||||||
|
|
||||||
|
|
||||||
|
> \hol{val RTC_REL_strongind = DB.fetch "-" "RTC_REL_strongind"}
|
||||||
|
|
||||||
|
val RTC_REL_strongind = |- !R RTC_REL'.
|
||||||
|
(!x y. R x y ==> RTC_REL' x y) \holAnd{} (!x. RTC_REL' x x) \holAnd{}
|
||||||
|
(!x y z.
|
||||||
|
RTC_REL R x y \holAnd{} RTC_REL' x y \holAnd{} RTC_REL R y z \holAnd{}
|
||||||
|
RTC_REL' y z ==>
|
||||||
|
RTC_REL' x z) ==>
|
||||||
|
( !a0 a1. RTC_REL R a0 a1 ==> RTC_REL' a0 a1)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: \hol{EVEN}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (EVEN_REL_rules, EVEN_REL_ind, EVEN_REL_cases) = Hol_reln
|
||||||
|
`(EVEN_REL 0) \holAnd{} (!n. EVEN_REL n ==> (EVEN_REL (n + 2)))`};
|
||||||
|
|
||||||
|
val EVEN_REL_cases =
|
||||||
|
|- !a0. EVEN_REL a0 <=> (a0 = 0) \holOr{} ?n. (a0 = n + 2) \holAnd{} EVEN_REL n
|
||||||
|
|
||||||
|
val EVEN_REL_rules =
|
||||||
|
|- EVEN_REL 0 \holAnd{} !n. EVEN_REL n ==> EVEN_REL (n + 2)
|
||||||
|
|
||||||
|
val EVEN_REL_ind = |- !EVEN_REL'.
|
||||||
|
(EVEN_REL' 0 \holAnd{} (!n. EVEN_REL' n ==> EVEN_REL' (n + 2))) ==>
|
||||||
|
(!a0. EVEN_REL a0 ==> EVEN_REL' a0)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that in this example there is exactly one fixpoint
|
||||||
|
\item therefore, for these rules the inductive and coinductive relation coincide
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Dummy Relations}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (DF_rules, DF_ind, DF_cases) = Hol_reln
|
||||||
|
`(!n. DF (n+1) ==> (DF n))`}
|
||||||
|
|
||||||
|
> \hol{val (DT_rules, DT_coind, DT_cases) = Hol_coreln
|
||||||
|
`(!n. DT (n+1) ==> (DT n))`}
|
||||||
|
|
||||||
|
val DT_coind =
|
||||||
|
|- !DT'. (!a0. DT' a0 ==> DT' (a0 + 1)) ==> !a0. DT' a0 ==> DT a0
|
||||||
|
|
||||||
|
val DF_ind =
|
||||||
|
|- !DF'. (!n. DF' (n + 1) ==> DF' n) ==> !a0. DF a0 ==> DF' a0
|
||||||
|
|
||||||
|
val DT_cases = |- !a0. DT a0 <=> DT (a0 + 1):
|
||||||
|
val DF_cases = |- !a0. DF a0 <=> DF (a0 + 1):
|
||||||
|
\end{semiverbatim}
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that the definitions of \hol{DT} and \hol{DF} look like a non-terminating recursive definition
|
||||||
|
\item \hol{DT} is always true, \ie \hol{|- !n.\ DT n}
|
||||||
|
\item \hol{DF} is always false, \ie \hol{|- !n.\ \holNeg{}(DF n)}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\section{Quotient Types}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{quotientLib} allows defining types as quotients of existing types with respect to a \emph{partial equivalence relation}
|
||||||
|
\item each equivalence class becomes a value of the new type
|
||||||
|
\item partiality allows ignoring certain values of the original type
|
||||||
|
\item \ml{quotientLib} allows lifting definitions and lemmata as well
|
||||||
|
\item details are technical and won't be presented here
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types Example}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's assume we have an implementation of finite sets of numbers as
|
||||||
|
binary trees with
|
||||||
|
\begin{itemize}
|
||||||
|
\item type \hol{binset}
|
||||||
|
\item binary tree invariant \hol{WF\_BINSET :\ binset -> bool}
|
||||||
|
\item constant \hol{empty\_binset}
|
||||||
|
\item add and member functions \hol{add :\ num -> binset -> binset},\\ \hol{mem :\ binset -> num -> bool}
|
||||||
|
\end{itemize}
|
||||||
|
\item we can define a partial equivalence relation by\\
|
||||||
|
\hol{binset\_equiv b1 b2 := (\\
|
||||||
|
\-\ \ WF\_BINSET b1 \holAnd{} WF\_BINSET b2 \holAnd{}\\
|
||||||
|
\-\ \ (!n.\ mem b1 n <=> mem b2 n))}
|
||||||
|
\item this allows defining a quotient type of sets of numbers
|
||||||
|
\item functions \hol{empty\_binset}, \hol{add} and \hol{mem} as well as lemmata about them can be lifted automatically
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types Summary}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item quotient types are sometimes very useful
|
||||||
|
\begin{itemize}
|
||||||
|
\item \eg rational numbers are defined as a quotient type
|
||||||
|
\end{itemize}
|
||||||
|
\item there is powerful infrastructure for them
|
||||||
|
\item many tasks are automated
|
||||||
|
\item however, the details are technical and won't be discussed here
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\section{Case Expressions}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Pattern Matching / Case Expressions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item pattern matching ubiquitous in functional programming
|
||||||
|
\item pattern matching is a powerful technique
|
||||||
|
\item it helps to write concise, readable definitions
|
||||||
|
\item very handy and frequently used for interactive theorem proving
|
||||||
|
\item however, it is \alert{not directly supported} by HOL's logic
|
||||||
|
\item representations in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item sets of equations as produced by \hol{Define}
|
||||||
|
\item decision trees (printed as case-expressions)
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{TFL / \texttt{Define}}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item we have already used top-level pattern matches with the TFL package
|
||||||
|
\item \hol{Define} is able to handle them
|
||||||
|
\begin{itemize}
|
||||||
|
\item all the semantic complexity is taken care of
|
||||||
|
\item no special syntax or functions remain
|
||||||
|
\item no special rewrite rules, reasoning tools needed afterwards
|
||||||
|
\end{itemize}
|
||||||
|
\item \hol{Define} produces a set of equations
|
||||||
|
\item this is the recommended way of using pattern matching in HOL
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `(ZIP (x::xs) (y::ys) = (x,y)::(ZIP xs ys)) \holAnd{}
|
||||||
|
(ZIP [] [] = [])`}
|
||||||
|
val ZIP_def = |- (!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys) \holAnd{}
|
||||||
|
(ZIP [] [] = [])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
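
\begin{frame}[fragile]
\frametitle{TFL / \texttt{Define} -- Using the Equations}

The returned equations can be used directly for rewriting. As an
illustration, a ground instance of \hol{ZIP} evaluates by rewriting with
\hol{ZIP\_def} alone:

\begin{exampleblock}{Example (sketch)}
\begin{semiverbatim}\scriptsize
(* each recursive call is unfolded by the equations in ZIP_def *)
val ZIP_EVAL = prove (
  ``ZIP [1;2] [3;4] = [(1,3); (2,4)]``,
  REWRITE_TAC [ZIP_def]);
\end{semiverbatim}
\end{exampleblock}

\hol{EVAL} typically achieves the same effect, since \hol{Define} registers the
equations for evaluation.

\end{frame}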
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expressions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item sometimes one does not want to use this compilation by TFL
|
||||||
|
\begin{itemize}
|
||||||
|
\item one wants to use pattern-matches somewhere nested in a term
|
||||||
|
\item one might not want to introduce a new constant
|
||||||
|
\item one might want to avoid using TFL for technical reasons
|
||||||
|
\end{itemize}
|
||||||
|
\item in such situations, case-expressions can be used
|
||||||
|
\item their syntax is similar to the syntax used by SML
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `ZIP xs ys = case (xs, ys) of
|
||||||
|
(x::xs, y::ys) => (x,y)::(ZIP xs ys)
|
||||||
|
| ([], []) => []`}
|
||||||
|
val ZIP_def = |- !ys xs. ZIP xs ys =
|
||||||
|
case (xs,ys) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys'
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expressions II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the datatype package defines case-constants for each datatype
|
||||||
|
\item the parser contains a pattern compilation algorithm
|
||||||
|
\item case-expressions are compiled by the parser to decision trees using case-constants
|
||||||
|
\item pretty printer prints these decision trees as case-expressions again
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val ZIP_def = |- !ys xs. ZIP xs ys =
|
||||||
|
pair_CASE (xs,ys)
|
||||||
|
(\textbsl{}v v1.
|
||||||
|
list_CASE v (list_CASE v1 [] (\textbsl{}v4 v5. ARB))
|
||||||
|
(\textbsl{}x xs'. list_CASE v1 ARB (\textbsl{}y ys'. (x,y)::ZIP xs' ys'))):
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Issues}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item using case expressions feels very natural to functional programmers
|
||||||
|
\item case-expressions allow concise, readable definitions
|
||||||
|
\item however, there are also many drawbacks
|
||||||
|
\item there is large, complicated code in the parser and pretty printer
|
||||||
|
\begin{itemize}
|
||||||
|
\item this is outside the kernel
|
||||||
|
\item parsing a pretty-printed term can result in a non $\alpha$-equivalent one
|
||||||
|
\item there are bugs in this code (see \eg Issue \#416 reported 8 May 2017)
|
||||||
|
\end{itemize}
|
||||||
|
\item the results are hard to predict
|
||||||
|
\begin{itemize}
|
||||||
|
\item heuristics involved in creating decision tree
|
||||||
|
\item however, it is beneficial that proofs follow this internal, volatile structure
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Issues II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item technical issues
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is tricky to reason about decision trees
|
||||||
|
\item rewrite rules about case-constants need to be fetched from \hol{TypeBase} (see the example on the next slide)
|
||||||
|
\begin{itemize}
|
||||||
|
\item alternative \hol{srw\_ss} often does more than wanted
|
||||||
|
\end{itemize}
|
||||||
|
\item partially evaluated decision-trees are not pretty printed nicely any more
|
||||||
|
\end{itemize}
|
||||||
|
\item underspecified functions
|
||||||
|
\begin{itemize}
|
||||||
|
\item decision trees are exhaustive
|
||||||
|
\item they list underspecified cases explicitly with value \hol{ARB}
|
||||||
|
\item this can be lengthy
|
||||||
|
\item \hol{Define} in contrast hides underspecified cases
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
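
\begin{frame}[fragile]
\frametitle{Fetching Case-Constant Rewrites from \hol{TypeBase}}

The case-constant rewrites mentioned on the previous slide can be looked up
via \hol{TypeBase}; the output below is shown schematically.

\begin{semiverbatim}\scriptsize
> TypeBase.case_def_of ``:'a list``;
val it =
   |- (!v f. list_CASE [] v f = v) \holAnd{}
      (!a0 a1 v f. list_CASE (a0::a1) v f = f a0 a1): thm
\end{semiverbatim}

Similarly, \hol{TypeBase.nchotomy\_of} and \hol{TypeBase.case\_cong\_of} provide
the case-split and congruence theorems that are useful when reasoning about
decision trees.

\end{frame}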
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example I}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
(((case (l1,l2) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l1 = []) \holAnd{} (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example IIa -- partial evaluation}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def] >>
|
||||||
|
REWRITE_TAC[pairTheory.pair_case_def] >> BETA_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
(((case l1 of
|
||||||
|
[] => (case l2 of [] => [] | v4::v5 => ARB)
|
||||||
|
| x::xs' => case l2 of [] => ARB | y::ys' => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l1 = []) \holAnd{} (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example IIb -- following tree structure}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def] >>
|
||||||
|
Cases_on `l1` >| [
|
||||||
|
REWRITE_TAC[listTheory.list_case_def]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l2.
|
||||||
|
(LENGTH [] = LENGTH l2) ==>
|
||||||
|
(((case ([],l2) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Summary}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item case expressions are natural to functional programmers
|
||||||
|
\item they allow concise, readable definitions
|
||||||
|
\item however, fancy parser and pretty-printer needed
|
||||||
|
\begin{itemize}
|
||||||
|
\item trustworthiness issues
|
||||||
|
\item sanity check lemmata advisable (see the example on the next slide)
|
||||||
|
\end{itemize}
|
||||||
|
\item reasoning about case expressions can be tricky and lengthy
|
||||||
|
\item proofs about case expression often hard to maintain
|
||||||
|
\item therefore, use top-level pattern matching via \hol{Define} if easily possible
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
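
\begin{frame}[fragile]
\frametitle{Case Expression Summary -- Sanity Check Example}

A sketch of a sanity-check lemma for the case-expression version of
\hol{ZIP}: it confirms that the definition behaves as intended on constructor
arguments. The exact simplifier calls may need minor adjustment.

\begin{semiverbatim}\scriptsize
(* intended equation for the cons case; ZIP_def is the
   case-expression version of the definition            *)
val ZIP_CONS = prove (
  ``!x xs y ys. ZIP (x::xs) (y::ys) = (x,y)::(ZIP xs ys)``,
  REPEAT GEN_TAC >>
  CONV_TAC (LAND_CONV (ONCE_REWRITE_CONV [ZIP_def])) >>
  SIMP_TAC list_ss []);

(* intended equation for the nil case *)
val ZIP_NIL = prove (``ZIP [] [] = []``,
  SIMP_TAC list_ss [Once ZIP_def]);
\end{semiverbatim}

\end{frame}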
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
472
lectures/14a_final_project.tex
Normal file
472
lectures/14a_final_project.tex
Normal file
@ -0,0 +1,472 @@
|
|||||||
|
\part{Advanced Definition Principles}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\section{Inductive and Coinductive Relations}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Relations}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item a relation is a function from some arguments to \texttt{bool}
|
||||||
|
\item the following example types are all types of relations:
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{:\ 'a -> 'a -> bool}
|
||||||
|
\item \hol{:\ 'a -> 'b -> bool}
|
||||||
|
\item \hol{:\ 'a -> 'b -> 'c -> 'd -> bool}
|
||||||
|
\item \hol{:\ ('a \# 'b \# 'c) -> bool}
|
||||||
|
\item \hol{:\ bool}
|
||||||
|
\item \hol{:\ 'a -> bool}
|
||||||
|
\end{itemize}
|
||||||
|
\item relations are closely related to sets
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{R a b c <=> (a, b, c) IN \{(a, b, c) | R a b c\}}
|
||||||
|
\item \hol{(a, b, c) IN S <=> (\textbsl{}a b c.\ (a, b, c) IN S) a b c}
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Relations II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item relations are often defined by a set of \emph{rules}
|
||||||
|
\begin{minipage}{.9\textwidth}
|
||||||
|
\begin{exampleblock}{Definition of Reflexive-Transitive Closure}
|
||||||
|
The transitive reflexive closure of a relation \hol{R : 'a -> 'a -> bool} can
|
||||||
|
be defined as the least relation \hol{RTC R} that satisfies the following rules:
|
||||||
|
\bigskip
|
||||||
|
\begin{center}
|
||||||
|
$\inferrule*{\hol{R x y}}{\hol{RTC R x y}}$\quad
|
||||||
|
$\inferrule*{\ }{\hol{RTC R x x}}$\quad
|
||||||
|
$\inferrule*{\hol{RTC R x y}\\\hol{RTC R y z}}{\hol{RTC R x z}}$
|
||||||
|
\end{center}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{minipage}
|
||||||
|
\item if the rules are monotone, a least and a greatest fixpoint exist (Knaster-Tarski theorem)
|
||||||
|
\item least fixpoints give rise to \emph{inductive relations}
|
||||||
|
\item greatest fixpoints give rise to \emph{coinductive relations}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{(Co)inductive Relations in HOL}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{(Co)IndDefLib} provides infrastructure for defining (co)inductive relations
|
||||||
|
\item given a set of rules, \hol{Hol\_(co)reln} defines (co)inductive relations
|
||||||
|
\item 3 theorems are returned and stored in the current theory
|
||||||
|
\begin{itemize}
|
||||||
|
\item a rules theorem --- it states that the defined constant satisfies the rules
|
||||||
|
\item a cases theorem --- this is an equational form of the rules showing that the defined relation is indeed a fixpoint
|
||||||
|
\item a (co)induction theorem
|
||||||
|
\end{itemize}
|
||||||
|
\item additionally, a strong (co)induction theorem is stored in the current theory
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Transitive Reflexive Closure}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (RTC_REL_rules, RTC_REL_ind, RTC_REL_cases) = Hol_reln `
|
||||||
|
(!x y. R x y ==> RTC_REL R x y) \holAnd{}
|
||||||
|
(!x. RTC_REL R x x) \holAnd{}
|
||||||
|
    (!x y z. RTC_REL R x y \holAnd{} RTC_REL R y z ==> RTC_REL R x z)`}
|
||||||
|
|
||||||
|
val RTC_REL_rules = |- !R.
|
||||||
|
(!x y. R x y ==> RTC_REL R x y) \holAnd{} (!x. RTC_REL R x x) \holAnd{}
|
||||||
|
       (!x y z. RTC_REL R x y \holAnd{} RTC_REL R y z ==> RTC_REL R x z)
|
||||||
|
|
||||||
|
val RTC_REL_cases = |- !R a0 a1.
|
||||||
|
RTC_REL R a0 a1 <=>
|
||||||
|
       (R a0 a1 \holOr{} (a1 = a0) \holOr{} ?y. RTC_REL R a0 y \holAnd{} RTC_REL R y a1)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Transitive Reflexive Closure II}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val RTC_REL_ind = |- !R RTC_REL'.
|
||||||
|
((!x y. R x y ==> RTC_REL' x y) \holAnd{} (!x. RTC_REL' x x) \holAnd{}
|
||||||
|
        (!x y z. RTC_REL' x y \holAnd{} RTC_REL' y z ==> RTC_REL' x z)) ==>
|
||||||
|
(!a0 a1. RTC_REL R a0 a1 ==> RTC_REL' a0 a1)
|
||||||
|
|
||||||
|
|
||||||
|
> \hol{val RTC_REL_strongind = DB.fetch "-" "RTC_REL_strongind"}
|
||||||
|
|
||||||
|
val RTC_REL_strongind = |- !R RTC_REL'.
|
||||||
|
(!x y. R x y ==> RTC_REL' x y) \holAnd{} (!x. RTC_REL' x x) \holAnd{}
|
||||||
|
(!x y z.
|
||||||
|
          RTC_REL R x y \holAnd{} RTC_REL' x y \holAnd{} RTC_REL R y z \holAnd{}
|
||||||
|
          RTC_REL' y z ==>
|
||||||
|
RTC_REL' x z) ==>
|
||||||
|
       (!a0 a1. RTC_REL R a0 a1 ==> RTC_REL' a0 a1)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{frame}
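
\begin{frame}[fragile]
\frametitle{Example: Using the Induction Theorem}

A sketch of how the induction theorem is typically used: applying it with
\hol{HO\_MATCH\_MP\_TAC} reduces a goal about \hol{RTC\_REL} to showing that
the desired property satisfies the rules.

\begin{semiverbatim}\scriptsize
(* monotonicity of RTC_REL, proved by rule induction *)
val RTC_REL_MONO = prove (
  ``!R1 R2. (!x y. R1 x y ==> R2 x y) ==>
            (!x y. RTC_REL R1 x y ==> RTC_REL R2 x y)``,
  REPEAT GEN_TAC >> STRIP_TAC >>
  HO_MATCH_MP_TAC RTC_REL_ind >>
  (* remaining goal: RTC_REL R2 satisfies the rules for R1 *)
  METIS_TAC [RTC_REL_rules]);
\end{semiverbatim}

\end{frame}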
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: \hol{EVEN}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (EVEN_REL_rules, EVEN_REL_ind, EVEN_REL_cases) = Hol_reln
|
||||||
|
`(EVEN_REL 0) \holAnd{} (!n. EVEN_REL n ==> (EVEN_REL (n + 2)))`};
|
||||||
|
|
||||||
|
val EVEN_REL_cases =
|
||||||
|
|- !a0. EVEN_REL a0 <=> (a0 = 0) \holOr{} ?n. (a0 = n + 2) \holAnd{} EVEN_REL n
|
||||||
|
|
||||||
|
val EVEN_REL_rules =
|
||||||
|
|- EVEN_REL 0 \holAnd{} !n. EVEN_REL n ==> EVEN_REL (n + 2)
|
||||||
|
|
||||||
|
val EVEN_REL_ind = |- !EVEN_REL'.
|
||||||
|
       (EVEN_REL' 0 \holAnd{}
|
||||||
|
        (!n. EVEN_REL' n ==> EVEN_REL' (n + 2))) ==>
|
||||||
|
       (!a0. EVEN_REL a0 ==> EVEN_REL' a0)
|
||||||
|
|
||||||
|
\end{semiverbatim}
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that in this example there is exactly one fixpoint
|
||||||
|
\item therefore, for these rules, the inductive and the coinductive relation coincide
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
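
\begin{frame}[fragile]
\frametitle{Example: \hol{EVEN\_REL} -- Using the Theorems}

A sketch of how the returned theorems are used: membership facts follow from
the rules theorem, and one unfolding of the cases theorem settles
non-membership. The tactic details may need minor adjustment.

\begin{semiverbatim}\scriptsize
(* membership via the rules theorem *)
val EVEN_REL_4 = prove (``EVEN_REL 4``,
  `EVEN_REL (0 + 2)` by PROVE_TAC [EVEN_REL_rules] >>
  `EVEN_REL ((0 + 2) + 2)` by PROVE_TAC [EVEN_REL_rules] >>
  FULL_SIMP_TAC arith_ss []);

(* non-membership via one unfolding of the cases theorem *)
val NOT_EVEN_REL_1 = prove (``~(EVEN_REL 1)``,
  ONCE_REWRITE_TAC [EVEN_REL_cases] >>
  SIMP_TAC arith_ss []);
\end{semiverbatim}

\end{frame}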
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Example: Dummy Relations}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val (DF_rules, DF_ind, DF_cases) = Hol_reln
|
||||||
|
`(!n. DF (n+1) ==> (DF n))`}
|
||||||
|
|
||||||
|
> \hol{val (DT_rules, DT_coind, DT_cases) = Hol_coreln
|
||||||
|
`(!n. DT (n+1) ==> (DT n))`}
|
||||||
|
|
||||||
|
val DT_coind =
|
||||||
|
|- !DT'. (!a0. DT' a0 ==> DT' (a0 + 1)) ==> !a0. DT' a0 ==> DT a0
|
||||||
|
|
||||||
|
val DF_ind =
|
||||||
|
|- !DF'. (!n. DF' (n + 1) ==> DF' n) ==> !a0. DF a0 ==> DF' a0
|
||||||
|
|
||||||
|
val DT_cases = |- !a0. DT a0 <=> DT (a0 + 1):
|
||||||
|
val DF_cases = |- !a0. DF a0 <=> DF (a0 + 1):
|
||||||
|
\end{semiverbatim}
|
||||||
|
\begin{itemize}
|
||||||
|
\item notice that for both \hol{DT} and \hol{DF} we used essentially a non-terminating recursion
|
||||||
|
\item \hol{DT} is always true, \ie \hol{|- !n.\ DT n}
|
||||||
|
\item \hol{DF} is always false, \ie \hol{|- !n.\ \holNeg{}(DF n)}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\section{Quotient Types}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \ml{quotientLib} allows defining types as quotients of existing types with respect to a \emph{partial equivalence relation}
|
||||||
|
\item each equivalence class becomes a value of the new type
|
||||||
|
\item partiality allows ignoring certain values of the original type
|
||||||
|
\item \ml{quotientLib} allows lifting definitions and lemmata as well
|
||||||
|
\item details are technical and won't be presented here
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types Example}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item let's assume we have an implementation of finite sets of numbers as
|
||||||
|
binary trees with
|
||||||
|
\begin{itemize}
|
||||||
|
\item type \hol{binset}
|
||||||
|
\item binary tree invariant \hol{WF\_BINSET :\ binset -> bool}
|
||||||
|
\item constant \hol{empty\_binset}
|
||||||
|
\item add and member functions \hol{add :\ num -> binset -> binset},\\ \hol{mem :\ binset -> num -> bool}
|
||||||
|
\end{itemize}
|
||||||
|
\item we can define a partial equivalence relation by\\
|
||||||
|
\hol{binset\_equiv b1 b2 := (\\
|
||||||
|
\-\ \ WF\_BINSET b1 \holAnd{} WF\_BINSET b2 \holAnd{}\\
|
||||||
|
\-\ \ (!n.\ mem b1 n <=> mem b2 n))}
|
||||||
|
\item this allows defining a quotient type of sets of numbers
|
||||||
|
\item functions \hol{empty\_binset}, \hol{add} and \hol{mem} as well as lemmata about them can be lifted automatically
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Quotient Types Summary}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item quotient types are sometimes very useful
|
||||||
|
\begin{itemize}
|
||||||
|
\item \eg rational numbers are defined as a quotient type
|
||||||
|
\end{itemize}
|
||||||
|
\item there is powerful infrastructure for them
|
||||||
|
\item many tasks are automated
|
||||||
|
\item however, the details are technical and won't be discussed here
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\section{Case Expressions}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Pattern Matching / Case Expressions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item pattern matching ubiquitous in functional programming
|
||||||
|
\item pattern matching is a powerful technique
|
||||||
|
\item it helps to write concise, readable definitions
|
||||||
|
\item very handy and frequently used for interactive theorem proving in higher-order logic (HOL)
|
||||||
|
\item however, it is \alert{not directly supported} by HOL's logic
|
||||||
|
\item representations in HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item sets of equations as produced by \hol{Define}
|
||||||
|
\item decision trees (printed as case-expressions)
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{TFL / \texttt{Define}}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item we have already used top-level pattern matches with the TFL package
|
||||||
|
\item \hol{Define} is able to handle them
|
||||||
|
\begin{itemize}
|
||||||
|
\item all the semantic complexity is taken care of
|
||||||
|
\item no special syntax or functions remain
|
||||||
|
\item no special rewrite rules, reasoning tools needed afterwards
|
||||||
|
\end{itemize}
|
||||||
|
\item \hol{Define} produces a set of equations
|
||||||
|
\item this is the recommended way of using pattern matching in HOL
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `(ZIP (x::xs) (y::ys) = (x,y)::(ZIP xs ys)) \holAnd{}
|
||||||
|
(ZIP [] [] = [])`}
|
||||||
|
val ZIP_def = |- (!ys y xs x. ZIP (x::xs) (y::ys) = (x,y)::ZIP xs ys) \holAnd{}
|
||||||
|
(ZIP [] [] = [])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
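
\begin{frame}[fragile]
\frametitle{TFL / \texttt{Define} -- Induction Theorem}

Besides the equations, \hol{Define} derives an induction principle that
follows the recursion structure of the definition. It is typically stored in
the current theory under the function name suffixed with \hol{\_ind}; the
exact name can be checked with \hol{DB.find}. A sketch:

\begin{exampleblock}{Example (sketch)}
\begin{semiverbatim}\scriptsize
(* fetch the induction theorem generated for ZIP *)
val ZIP_ind = DB.fetch "-" "ZIP_ind";

(* it can then be applied with HO_MATCH_MP_TAC ZIP_ind to prove
   properties by induction following ZIP's recursion structure   *)
\end{semiverbatim}
\end{exampleblock}

\end{frame}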
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expressions}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item sometimes one does not want to use this compilation by TFL
|
||||||
|
\begin{itemize}
|
||||||
|
\item one wants to use pattern-matches somewhere nested in a term
|
||||||
|
\item one might not want to introduce a new constant
|
||||||
|
\item one might want to avoid using TFL for technical reasons
|
||||||
|
\end{itemize}
|
||||||
|
\item in such situations, case-expressions can be used
|
||||||
|
\item their syntax is similar to the syntax used by SML
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
> \hol{val ZIP_def = Define `ZIP xs ys = case (xs, ys) of
|
||||||
|
(x::xs, y::ys) => (x,y)::(ZIP xs ys)
|
||||||
|
| ([], []) => []`}
|
||||||
|
val ZIP_def = |- !ys xs. ZIP xs ys =
|
||||||
|
case (xs,ys) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys'
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expressions II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the datatype package defines case-constants for each datatype
|
||||||
|
\item the parser contains a pattern compilation algorithm
|
||||||
|
\item case-expressions are compiled by the parser to decision trees using case-constants
|
||||||
|
\item pretty printer prints these decision trees as case-expressions again
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val ZIP_def = |- !ys xs. ZIP xs ys =
|
||||||
|
pair_CASE (xs,ys)
|
||||||
|
(\textbsl{}v v1.
|
||||||
|
list_CASE v (list_CASE v1 [] (\textbsl{}v4 v5. ARB))
|
||||||
|
(\textbsl{}x xs'. list_CASE v1 ARB (\textbsl{}y ys'. (x,y)::ZIP xs' ys'))):
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Issues}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item using case expressions feels very natural to functional programmers
|
||||||
|
\item case-expressions allow concise, readable definitions
|
||||||
|
\item however, there are also many drawbacks
|
||||||
|
\item there is large, complicated code in the parser and pretty printer
|
||||||
|
\begin{itemize}
|
||||||
|
\item this is outside the kernel
|
||||||
|
\item parsing a pretty-printed term can result in a non $\alpha$-equivalent one
|
||||||
|
\item there are bugs in this code (see \eg Issue \#416 reported 8 May 2017)
|
||||||
|
\end{itemize}
|
||||||
|
\item the results are hard to predict
|
||||||
|
\begin{itemize}
|
||||||
|
\item heuristics involved in creating decision tree
|
||||||
|
\item results sometimes hard to predict
|
||||||
|
\item however, it is beneficial that proofs follow this internal, volatile structure
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Issues II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item technical issues
|
||||||
|
\begin{itemize}
|
||||||
|
\item it is tricky to reason about decision trees
|
||||||
|
\item rewrite rules about case-constants need to be fetched from \hol{TypeBase}
|
||||||
|
\begin{itemize}
|
||||||
|
\item alternative \hol{srw\_ss} often does more than wanted
|
||||||
|
\end{itemize}
|
||||||
|
\item partially evaluated decision-trees are not pretty printed nicely any more
|
||||||
|
\end{itemize}
|
||||||
|
\item underspecified functions
|
||||||
|
\begin{itemize}
|
||||||
|
\item decision trees are exhaustive
|
||||||
|
\item they list underspecified cases explicitly with value \hol{ARB}
|
||||||
|
\item this can be lengthy
|
||||||
|
\item \hol{Define} in contrast hides underspecified cases
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example I}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
(((case (l1,l2) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l1 = []) \holAnd{} (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example IIa -- partial evaluation}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def] >>
|
||||||
|
REWRITE_TAC[pairTheory.pair_case_def] >> BETA_TAC
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
(((case l1 of
|
||||||
|
[] => (case l2 of [] => [] | v4::v5 => ARB)
|
||||||
|
| x::xs' => case l2 of [] => ARB | y::ys' => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l1 = []) \holAnd{} (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Example IIb -- following tree structure}
|
||||||
|
|
||||||
|
\begin{block}{Partial Proof Script}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val _ = prove (``!l1 l2.
|
||||||
|
(LENGTH l1 = LENGTH l2) ==>
|
||||||
|
((ZIP l1 l2 = []) <=> ((l1 = []) \holAnd{} (l2 = [])))``,
|
||||||
|
|
||||||
|
ONCE_REWRITE_TAC [ZIP_def] >>
|
||||||
|
Cases_on `l1` >| [
|
||||||
|
REWRITE_TAC[listTheory.list_case_def]
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{block}{Current Goal}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
!l2.
|
||||||
|
(LENGTH [] = LENGTH l2) ==>
|
||||||
|
(((case ([],l2) of
|
||||||
|
([],[]) => []
|
||||||
|
| ([],v4::v5) => ARB
|
||||||
|
| (x::xs',[]) => ARB
|
||||||
|
| (x::xs',y::ys') => (x,y)::ZIP xs' ys') =
|
||||||
|
[]) <=> (l2 = []))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Case Expression Summary}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item case expressions are natural to functional programmers
|
||||||
|
\item they allow concise, readable definitions
|
||||||
|
\item however, fancy parser and pretty-printer needed
|
||||||
|
\begin{itemize}
|
||||||
|
\item trustworthiness issues
|
||||||
|
\item sanity check lemmata advisable
|
||||||
|
\end{itemize}
|
||||||
|
\item reasoning about case expressions can be tricky and lengthy
|
||||||
|
\item proofs about case expression often hard to maintain
|
||||||
|
\item therefore, use top-level pattern matching via \hol{Define} if easily possible
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
543
lectures/15_maintainable_proofs.tex
Normal file
543
lectures/15_maintainable_proofs.tex
Normal file
@ -0,0 +1,543 @@
|
|||||||
|
\part{Maintainable Proofs}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Motivation}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item proofs are hopefully still used in a few weeks, months or even years
|
||||||
|
\item often the environment changes slightly during the lifetime of a proof
|
||||||
|
\begin{itemize}
|
||||||
|
\item your definitions change slightly
|
||||||
|
\item your own lemmata change (\eg become more general)
|
||||||
|
\item used libraries change
|
||||||
|
\item HOL changes
|
||||||
|
\begin{itemize}
|
||||||
|
\item automation becomes more powerful
|
||||||
|
\item rewrite rules in certain simpsets change
|
||||||
|
\item definition packages produce slightly different theorems
|
||||||
|
\item autogenerated variable-names change
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\item even if HOL and used libraries are stable, proofs often go through several iterations
|
||||||
|
\item often they are adapted by someone other than the original author
|
||||||
|
\item \alert{therefore it is important that proofs are easily maintainable}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Nice Properties of Proofs}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item maintainability is closely linked to other desirable properties of proofs
|
||||||
|
\item proofs should be
|
||||||
|
\begin{itemize}
|
||||||
|
\item easily understandable
|
||||||
|
\item well-structured
|
||||||
|
\item robust
|
||||||
|
\begin{itemize}
|
||||||
|
\item they should be able to cope with minor changes to environment
|
||||||
|
\item if they fail they should do so at sensible points
|
||||||
|
\end{itemize}
|
||||||
|
\item reusable
|
||||||
|
\end{itemize}
|
||||||
|
\item How can one write proofs with such properties?
|
||||||
|
\item as usual, there are no easy answers but plenty of good advice
|
||||||
|
\item I recommend following the advice of the \emph{ProofStyle} manual
|
||||||
|
\item parts of this advice as well as a few extra points are discussed in the following
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Formatting}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item format your proof such that it is easily understandable
|
||||||
|
\item make the structure of the proof very clear
|
||||||
|
\item \alert{show clearly where subgoals start and stop}
|
||||||
|
\item use indentation to mark proofs of subgoals
|
||||||
|
\item use empty lines to separate large proofs of subgoals
|
||||||
|
\item use comments where appropriate
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formatting Example I}
|
||||||
|
|
||||||
|
\begin{alertblock}{Bad Example Term Formatting}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> LENGTH l2 <
|
||||||
|
LENGTH (l1 ++ l2)``,
|
||||||
|
...)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Good Example Term Formatting}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==>
|
||||||
|
(LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
...)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formatting Example II}
|
||||||
|
|
||||||
|
\begin{alertblock}{Bad Example Subgoals}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >>
|
||||||
|
REWRITE_TAC[] >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{alertblock}{Improved Example Subgoals}
|
||||||
|
At least show when a subgoal starts and ends
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >> (
|
||||||
|
REWRITE_TAC[]
|
||||||
|
) >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formatting Example II 2}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Good Example Subgoals}
|
||||||
|
Make sure \texttt{REWRITE\_TAC} is only applied to the first subgoal and that the
|
||||||
|
proof fails if it does not solve this subgoal.
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >- (
|
||||||
|
REWRITE_TAC[]
|
||||||
|
) >>
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Formatting Example II 3}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Alternative Good Example Subgoals}
|
||||||
|
Alternative good formatting using \texttt{THENL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >| [
|
||||||
|
REWRITE_TAC[],
|
||||||
|
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC
|
||||||
|
])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\begin{alertblock}{Another Bad Example Subgoals}
|
||||||
|
Bad formatting using \texttt{THENL}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >| [REWRITE_TAC[],
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >> DECIDE_TAC])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Some basic advice}
|
||||||
|
\begin{itemize}
|
||||||
|
\item use semicolons after each declaration
|
||||||
|
\begin{itemize}
|
||||||
|
\item if an exception is raised during interactive processing (\eg by a failing proof), previous successful declarations are kept
|
||||||
|
\item it sometimes leads to better error messages in case of parsing errors
|
||||||
|
\end{itemize}
|
||||||
|
\item use plenty of parentheses to make structure very clear
|
||||||
|
\item don't ignore parser warnings
|
||||||
|
\begin{itemize}
|
||||||
|
\item especially warnings about multiple possible parse trees are likely to lead to unstable proofs
|
||||||
|
\item understand why such warnings occur and make sure there is no problem
|
||||||
|
\end{itemize}
|
||||||
|
\item format your development well
|
||||||
|
\begin{itemize}
|
||||||
|
\item use indentation
|
||||||
|
\item use linebreaks at sensible points
|
||||||
|
\item don't use overlong lines
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item don't use \ml{open} in the middle of files
|
||||||
|
\item personal opinion: avoid unicode in source files
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{KISS and Premature Optimisation}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item follow standard design principles
|
||||||
|
\begin{itemize}
|
||||||
|
\item \emph{KISS} principle
|
||||||
|
\item ``\emph{premature optimization is the root of all evil}'' (Donald Knuth)
|
||||||
|
\end{itemize}
|
||||||
|
\item don't try to be overly clever
|
||||||
|
\item simple proofs are preferable
|
||||||
|
\item proof-checking-speed mostly unimportant
|
||||||
|
\item conciseness not a value in itself but desirable if it helps
|
||||||
|
\begin{itemize}
|
||||||
|
\item readability
|
||||||
|
\item maintainability
|
||||||
|
\end{itemize}
|
||||||
|
\item abstraction is often desirable, but also has a price
|
||||||
|
\begin{itemize}
|
||||||
|
\item don't use too complex, artificial definitions and lemmata
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Too much abstraction}
|
||||||
|
|
||||||
|
\begin{alertblock}{Too much abstraction Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val TOO_ABSTRACT_LEMMA = prove (``
|
||||||
|
!(size :'a -> num) (P : 'a -> bool) (combine : 'a -> 'a -> 'a).
|
||||||
|
(!x. P x ==> (0 < size x)) \holAnd{}
|
||||||
|
(!x1 x2. size x1 + size x2 <= size (combine x1 x2)) ==>
|
||||||
|
|
||||||
|
(!x1 x2. P x1 ==> (size x2 < size (combine x1 x2)))``,
|
||||||
|
...)
|
||||||
|
|
||||||
|
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
  \textrm{some proof using} TOO_ABSTRACT_LEMMA
|
||||||
|
)
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Too clever tactics}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item a common mistake is to use too clever tactics
|
||||||
|
\begin{itemize}
|
||||||
|
\item intended to work on many (sub)goals
|
||||||
|
\item using \hol{TRY} and other fancy trial and error mechanisms
|
||||||
|
\item intended to replace multiple simple, clear tactics
|
||||||
|
\end{itemize}
|
||||||
|
\item typical case: a tactic containing \hol{TRY} applied to many subgoals
|
||||||
|
\item it is often hard to see why such tactics work
|
||||||
|
\item if something goes wrong, they are hard to debug
|
||||||
|
\item general advice: don't factor with tactics, instead use definitions and lemmata
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Too Clever Tactics Example I}
|
||||||
|
|
||||||
|
\begin{alertblock}{Bad Example Subgoals}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >> (
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC
|
||||||
|
))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Alternative Good Example Subgoals II}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >> SIMP_TAC list_ss [])
|
||||||
|
|
||||||
|
prove (``!l1 l2. l1 <> [] ==> (LENGTH l2 < LENGTH (l1 ++ l2))``,
|
||||||
|
Cases >| [
|
||||||
|
REWRITE_TAC[],
|
||||||
|
|
||||||
|
REWRITE_TAC[listTheory.LENGTH, listTheory.LENGTH_APPEND] >>
|
||||||
|
REPEAT STRIP_TAC >>
|
||||||
|
DECIDE_TAC
|
||||||
|
])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Too Clever Tactics Example II}
|
||||||
|
|
||||||
|
\begin{alertblock}{Bad Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val oadd_def = Define `(oadd (SOME n1) (SOME n2) = (SOME (n1 + n2))) \holAnd{}
|
||||||
|
(oadd _ _ = NONE)`;
|
||||||
|
val osub_def = Define `(osub (SOME n1) (SOME n2) = (SOME (n1 - n2))) \holAnd{}
|
||||||
|
(osub _ _ = NONE)`;
|
||||||
|
val omul_def = Define `(omul (SOME n1) (SOME n2) = (SOME (n1 * n2))) \holAnd{}
|
||||||
|
(omul _ _ = NONE)`;
|
||||||
|
|
||||||
|
val obin_NONE_TAC =
|
||||||
|
Cases_on `o1` >> Cases_on `o2` >>
|
||||||
|
SIMP_TAC std_ss [oadd_def, osub_def, omul_def];
|
||||||
|
|
||||||
|
val oadd_NONE = prove (
|
||||||
|
``!o1 o2. (oadd o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
obin_NONE_TAC);
|
||||||
|
val osub_NONE = prove (
|
||||||
|
``!o1 o2. (osub o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
obin_NONE_TAC);
|
||||||
|
val omul_NONE = prove (
|
||||||
|
``!o1 o2. (omul o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
obin_NONE_TAC);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Too Clever Tactics Example II}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Good Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val obin_def = Define `(obin op (SOME n1) (SOME n2) = (SOME (op n1 n2))) \holAnd{}
|
||||||
|
(obin _ _ _ = NONE)`;
|
||||||
|
val oadd_def = Define `oadd = obin \$+`;
|
||||||
|
val osub_def = Define `osub = obin \$-`;
|
||||||
|
val omul_def = Define `omul = obin \$*`;
|
||||||
|
|
||||||
|
val obin_NONE = prove (
|
||||||
|
``!op o1 o2. (obin op o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
Cases_on `o1` >> Cases_on `o2` >> SIMP_TAC std_ss [obin_def]);
|
||||||
|
|
||||||
|
val oadd_NONE = prove (
|
||||||
|
``!o1 o2. (oadd o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
REWRITE_TAC[oadd_def, obin_NONE]);
|
||||||
|
val osub_NONE = prove (
|
||||||
|
``!o1 o2. (osub o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
REWRITE_TAC[osub_def, obin_NONE]);
|
||||||
|
val omul_NONE = prove (
|
||||||
|
``!o1 o2. (omul o1 o2 = NONE) <=> (o1 = NONE) \holOr{} (o2 = NONE)``,
|
||||||
|
REWRITE_TAC[omul_def, obin_NONE]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Use many subgoals and lemmata}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item often it is beneficial to use subgoals
|
||||||
|
\begin{itemize}
|
||||||
|
\item they structure long proofs well
|
||||||
|
\item they help keep the proof state clean
|
||||||
|
\item they mark clearly what one tries to prove
|
||||||
|
\item they provide points where proofs can break sensibly
|
||||||
|
\end{itemize}
|
||||||
|
\item general enough subgoals should become lemmata
|
||||||
|
\begin{itemize}
|
||||||
|
\item this improves reusability
|
||||||
|
\item proof script for main lemma becomes shorter
|
||||||
|
\item proofs are disentangled
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Subgoal Example}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the following example is taken from exercise 5
|
||||||
|
\item we try to prove \hol{\texttt{!l.\ IS\_WEAK\_SUBLIST\_FILTER l l}}
|
||||||
|
\item given are the following definitions and lemmata
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{block}{}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val FILTER_BY_BOOLS_def = Define `
|
||||||
|
FILTER_BY_BOOLS bl l = MAP SND (FILTER FST (ZIP (bl, l)))`;
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_def = Define `IS_WEAK_SUBLIST_FILTER l1 l2 =
|
||||||
|
?(bl : bool list). (LENGTH bl = LENGTH l1) \holAnd{} (l2 = FILTER_BY_BOOLS bl l1)`;
|
||||||
|
|
||||||
|
val FILTER_BY_BOOLS_REWRITES = store_thm ("FILTER_BY_BOOLS_REWRITES",
|
||||||
|
``(FILTER_BY_BOOLS [] [] = []) \holAnd{}
|
||||||
|
(!b bl x xs. (FILTER_BY_BOOLS (b::bl) (x::xs) =
|
||||||
|
if b then x::(FILTER_BY_BOOLS bl xs) else FILTER_BY_BOOLS bl xs))``,
|
||||||
|
REWRITE_TAC [FILTER_BY_BOOLS_def, ZIP, MAP, FILTER] >>
|
||||||
|
Cases_on `b` >> REWRITE_TAC [MAP]);
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Subgoal Example II}
|
||||||
|
|
||||||
|
\begin{alertblock}{First Version}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_REFL = store_thm ("IS_WEAK_SUBLIST_FILTER_REFL",
|
||||||
|
``!l. IS_WEAK_SUBLIST_FILTER l l``,
|
||||||
|
REWRITE_TAC[IS_WEAK_SUBLIST_FILTER_def] >>
|
||||||
|
Induct_on `l` >- (
|
||||||
|
Q.EXISTS_TAC `[]` >>
|
||||||
|
SIMP_TAC list_ss [FILTER_BY_BOOLS_REWRITES]
|
||||||
|
) >>
|
||||||
|
FULL_SIMP_TAC std_ss [] >>
|
||||||
|
GEN_TAC >>
|
||||||
|
Q.EXISTS_TAC `T::bl` >>
|
||||||
|
ASM_SIMP_TAC list_ss [FILTER_BY_BOOLS_REWRITES])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the proof mixes properties of \texttt{IS\_WEAK\_SUBLIST\_FILTER} and
|
||||||
|
properties of \texttt{FILTER\_BY\_BOOLS}
|
||||||
|
\item it is hard to see what the main idea is
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Subgoal Example III}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the following proof separates the property of \texttt{FILTER\_BY\_BOOLS} into a subgoal
|
||||||
|
\item the main idea becomes clearer
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Subgoal Version}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_REFL = store_thm ("IS_WEAK_SUBLIST_FILTER_REFL",
|
||||||
|
``!l. IS_WEAK_SUBLIST_FILTER l l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[IS_WEAK_SUBLIST_FILTER_def] >>
|
||||||
|
`FILTER_BY_BOOLS (REPLICATE (LENGTH l) T) l = l` suffices_by (
|
||||||
|
METIS_TAC[LENGTH_REPLICATE]
|
||||||
|
) >>
|
||||||
|
Induct_on `l` >> (
|
||||||
|
ASM_SIMP_TAC list_ss [FILTER_BY_BOOLS_REWRITES, REPLICATE]
|
||||||
|
))
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Subgoal Example IV}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item the subgoal is general enough to justify a lemma
|
||||||
|
\item the structure becomes even cleaner
|
||||||
|
\item this improves reusability
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Lemma Version}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
val FILTER_BY_BOOLS_REPL_T = store_thm ("FILTER_BY_BOOLS_REPL_T",
|
||||||
|
``!l. FILTER_BY_BOOLS (REPLICATE (LENGTH l) T) l = l``,
|
||||||
|
Induct >> ASM_REWRITE_TAC [REPLICATE, FILTER_BY_BOOLS_REWRITES, LENGTH]);
|
||||||
|
|
||||||
|
val IS_WEAK_SUBLIST_FILTER_REFL = store_thm ("IS_WEAK_SUBLIST_FILTER_REFL",
|
||||||
|
``!l. IS_WEAK_SUBLIST_FILTER l l``,
|
||||||
|
GEN_TAC >>
|
||||||
|
REWRITE_TAC[IS_WEAK_SUBLIST_FILTER_def] >>
|
||||||
|
Q.EXISTS_TAC `REPLICATE (LENGTH l) T` >>
|
||||||
|
SIMP_TAC list_ss [FILTER_BY_BOOLS_REPL_T, LENGTH_REPLICATE])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Avoid Autogenerated Names}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item many HOL-tactics introduce new variable names
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{Induct}
|
||||||
|
\item \hol{Cases}
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item the new names are often very artificial
|
||||||
|
\item even worse, generated names might change in the future
|
||||||
|
\item proof scripts using autogenerated names are therefore
|
||||||
|
\begin{itemize}
|
||||||
|
\item hard to read
|
||||||
|
\item potentially fragile
|
||||||
|
\end{itemize}
|
||||||
|
\item therefore rename variables after they have been introduced
|
||||||
|
\item HOL has multiple tactics supporting renaming
|
||||||
|
\item most useful is \hol{rename1 `pat`}; it searches for the pattern and renames variables accordingly
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}[fragile]
|
||||||
|
\frametitle{Autogenerated Names Example}
|
||||||
|
|
||||||
|
\begin{alertblock}{Bad Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l. 1 < LENGTH l ==> (?x1 x2 l'. l = x1::x2::l')``,
|
||||||
|
GEN_TAC >>
|
||||||
|
Cases_on `l` >> SIMP_TAC list_ss [] >>
|
||||||
|
Cases_on `t` >> SIMP_TAC list_ss [])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{alertblock}
|
||||||
|
|
||||||
|
\begin{exampleblock}{Good Example}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
prove (``!l. 1 < LENGTH l ==> (?x1 x2 l'. l = x1::x2::l')``,
|
||||||
|
GEN_TAC >>
|
||||||
|
Cases_on `l` >> SIMP_TAC list_ss [] >>
|
||||||
|
rename1 `LENGTH l2` >>
|
||||||
|
Cases_on `l2` >> SIMP_TAC list_ss [])
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{exampleblock}
|
||||||
|
|
||||||
|
\begin{block}{Proof State before \hol{rename1}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
1 < SUC (LENGTH t) ==> ?x2 l'. t = x2::l'
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\begin{block}{Proof State after \hol{rename1}}
|
||||||
|
\begin{semiverbatim}\scriptsize
|
||||||
|
1 < SUC (LENGTH l2) ==> ?x2 l'. l2 = x2::l'
|
||||||
|
\end{semiverbatim}
|
||||||
|
\end{block}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
226
lectures/16_hol_overview.tex
Normal file
226
lectures/16_hol_overview.tex
Normal file
@ -0,0 +1,226 @@
|
|||||||
|
\part{Overview of HOL~4}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Overview of HOL 4}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item in this course we discussed the basics of HOL 4
|
||||||
|
\item you were encouraged to learn more on your own in exercises
|
||||||
|
\item there is a lot more to learn even after the end of the course
|
||||||
|
\begin{itemize}
|
||||||
|
\item many more libraries
|
||||||
|
\item proof tools
|
||||||
|
\item existing formalisations
|
||||||
|
\item ...
|
||||||
|
\end{itemize}
|
||||||
|
\item to really use HOL well, you should continue learning
|
||||||
|
\item to help getting started, a short overview is presented here
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Bare Source Directories}
|
||||||
|
|
||||||
|
The following source directories are the very basis of HOL. They
|
||||||
|
are required to build \hol{hol.bare}.
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/portableML} -- common stuff for PolyML and MoscowML
|
||||||
|
\item \hol{src/prekernel}
|
||||||
|
\item \hol{src/0} -- Standard Kernel
|
||||||
|
\item \hol{src/logging-kernel} -- Logging Kernel
|
||||||
|
\item \hol{src/experimental-kernel} -- Experimental Kernel
|
||||||
|
\item \hol{src/postkernel}
|
||||||
|
\item \hol{src/opentheory}
|
||||||
|
\item \hol{src/parse}
|
||||||
|
\item \hol{src/bool}
|
||||||
|
\item \hol{src/1}
|
||||||
|
\item \hol{src/proofman}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Basic Directories I}
|
||||||
|
|
||||||
|
On top of \texttt{hol.bare}, there are many basic theories and tools. These
|
||||||
|
are all required for building the main \texttt{hol} executable.
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/compute} -- fast ground term rewriting
|
||||||
|
\item \hol{src/HolSat} -- SAT solver interfaces
|
||||||
|
\item \hol{src/taut} -- propositional proofs using \texttt{HolSat}
|
||||||
|
\item \hol{src/marker} -- marking terms
|
||||||
|
\item \hol{src/q} -- parsing support
|
||||||
|
\item \hol{src/combin} -- combinators
|
||||||
|
\item \hol{src/lite} -- some simple lib with various stuff
|
||||||
|
\item \hol{src/refute} -- refutation prover, normal forms
|
||||||
|
\item \hol{src/metis} -- first order resolution prover
|
||||||
|
\item \hol{src/meson} -- first order model elimination prover
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Basic Directories II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/simp} -- simplifier
|
||||||
|
\item \hol{src/holyhammer} -- tool for finding Metis proofs
|
||||||
|
\item \hol{src/tactictoe} -- machine learning tool for finding proofs
|
||||||
|
\item \hol{src/IndDef} -- (co)inductive relation definitions
|
||||||
|
\item \hol{src/basicProof} -- library containing proof tools
|
||||||
|
\item \hol{src/relation} -- relations and order theory
|
||||||
|
\item \hol{src/one} -- unit type theory
|
||||||
|
\item \hol{src/pair} -- tuples
|
||||||
|
\item \hol{src/sum} -- sum types
|
||||||
|
\item \hol{src/tfl} -- defining terminating functions
|
||||||
|
\item \hol{src/option} -- option types
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Basic Directories III}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/num} -- numbers and arithmetic
|
||||||
|
\item \hol{src/pred\_set} -- predicate sets
|
||||||
|
\item \hol{src/datatype} -- Datatype package
|
||||||
|
\item \hol{src/list} -- list theories
|
||||||
|
\item \hol{src/monad} -- monads
|
||||||
|
\item \hol{src/quantHeuristics} -- instantiating quantifiers
|
||||||
|
\item \hol{src/unwind} -- lib for unwinding structural hardware definitions
|
||||||
|
\item \hol{src/pattern\_matches} -- pattern matches alternative
|
||||||
|
\item \hol{src/bossLib} -- main HOL lib loaded at start
|
||||||
|
\end{itemize}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\hol{bossLib} is one central library. It loads all basic theories and libraries and
|
||||||
|
provides convenient wrappers for the most common tools.
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL More Theories I}
|
||||||
|
|
||||||
|
Besides the basic libraries and theories that are required and loaded by \ml{hol}, there
|
||||||
|
are many more developments in HOL's source directory.
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/sort} -- sorting lists
|
||||||
|
\item \hol{src/string} -- strings
|
||||||
|
\item \hol{src/TeX} -- exporting LaTeX code
|
||||||
|
\item \hol{src/res\_quan} -- restricted quantifiers
|
||||||
|
\item \hol{src/quotient} -- quotient type package
|
||||||
|
\item \hol{src/finite\_map} -- finite map theory
|
||||||
|
\item \hol{src/bag} -- bags \aka multisets
|
||||||
|
\item \hol{src/n-bit} -- machine words
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL More Theories II}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/ring} -- reasoning about rings
|
||||||
|
\item \hol{src/integer} -- integers
|
||||||
|
\item \hol{src/llists} -- lazy lists
|
||||||
|
\item \hol{src/path} -- finite and infinite paths through a transition system
|
||||||
|
\item \hol{src/patricia} -- efficient finite map implementations using trees
|
||||||
|
\item \hol{src/emit} -- emitting SML and OCaml code
|
||||||
|
\item \hol{src/search} -- traversal of graphs that may contain cycles
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL More Theories III}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{src/rational} -- rational numbers
|
||||||
|
\item \hol{src/real} -- real numbers
|
||||||
|
\item \hol{src/complex} -- complex numbers
|
||||||
|
\item \hol{src/HolQbf} -- quantified boolean formulas
|
||||||
|
\item \hol{src/HolSmt} -- support for external SMT solvers
|
||||||
|
\item \hol{src/float} -- IEEE floating point numbers
|
||||||
|
\item \hol{src/floating-point} -- new version of IEEE floating point numbers
|
||||||
|
\item \hol{src/probability} -- some probability theory
|
||||||
|
\item \hol{src/temporal} -- shallow embedding of temporal logic
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Selected Examples I}
|
||||||
|
|
||||||
|
The directory \ml{examples} hosts many theories and libraries as well. There is not
|
||||||
|
always a clear distinction between an example and a development in \ml{src}. However,
|
||||||
|
in general examples are more specialised and often larger. They are not required to
|
||||||
|
follow HOL's coding style as much as developments in \ml{src}.
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{examples/balanced\_bst} -- finite maps via balanced trees
|
||||||
|
\item \hol{examples/unification} -- (nominal) unification
|
||||||
|
\item \hol{examples/Crypto} -- various block ciphers
|
||||||
|
\item \hol{examples/elliptic} -- elliptic curve cryptography
|
||||||
|
\item \hol{examples/formal-languages} -- regular and context free formal languages
|
||||||
|
\item \hol{examples/computability} -- basic computability theory
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Selected Examples II}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{examples/set-theory} -- axiomatic formalisation of set theory
|
||||||
|
\item \hol{examples/lambda} -- lambda calculus
|
||||||
|
\item \hol{examples/acl2} -- connection to ACL2 prover
|
||||||
|
\item \hol{examples/theorem-prover} -- soundness proof of Milawa prover
|
||||||
|
\item \hol{examples/PSL} -- formalisation of PSL
|
||||||
|
\item \hol{examples/HolBdd} -- Binary Decision Diagrams
|
||||||
|
\item \hol{examples/HolCheck} -- basic model checker
|
||||||
|
\item \hol{examples/temporal\_deep} -- deep embedding of temporal logics and automata
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Selected Examples III}
|
||||||
|
\begin{itemize}
|
||||||
|
\item \hol{examples/pgcl} -- formalisation of pGCL (the Probabilistic Guarded Command Language)
|
||||||
|
\item \hol{examples/dev} -- some hardware compilation
|
||||||
|
\item \hol{examples/STE} -- symbolic trajectory evaluation
|
||||||
|
\item \hol{examples/separationLogic} -- formalisation of separation logic
|
||||||
|
\item \hol{examples/ARM} -- formalisation of ARM architecture
|
||||||
|
\item \hol{examples/l3-machine-code} -- l3 language
|
||||||
|
\item \hol{examples/machine-code} -- compilers and decompilers to machine-code
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Concluding Remarks}
|
||||||
|
\begin{itemize}
|
||||||
|
\item some useful tools are a bit hidden in the HOL sources
|
||||||
|
\item moreover there are developments outside the main HOL 4 sources
|
||||||
|
\begin{itemize}
|
||||||
|
\item CakeML \emph{\url{https://cakeml.org}}
|
||||||
|
\end{itemize}
|
||||||
|
\item keep in touch with the community to continue learning about HOL 4
|
||||||
|
\begin{itemize}
|
||||||
|
\item mailing-list \ml{hol-info}
|
||||||
|
\item GitHub \emph{\url{https://github.com/HOL-Theorem-Prover/HOL}}
|
||||||
|
\item \emph{\url{https://hol-theorem-prover.org}}
|
||||||
|
\end{itemize}
|
||||||
|
\item if you continue using HOL, please consider sharing your work with the community
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
266
lectures/17_other_provers.tex
Normal file
266
lectures/17_other_provers.tex
Normal file
@ -0,0 +1,266 @@
|
|||||||
|
\part{Other Interactive Theorem Provers}
|
||||||
|
|
||||||
|
\frame[plain]{\partpage}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Other Interactive Theorem Provers}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item at the beginning we very briefly discussed other theorem provers
|
||||||
|
\item now, with more knowledge about HOL 4 we can discuss other provers and their differences to HOL 4 in more detail
|
||||||
|
\item HOL 4 is a good system
|
||||||
|
\item it is very well suited for the tasks required by the PROSPER project
|
||||||
|
\item however, as always \emph{choose the right tool for your task}
|
||||||
|
\item you might find a different prover more suitable for your needs
|
||||||
|
\item hopefully this course has enabled you to pick up other provers on your own without much trouble
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL 4}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL 4}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item based on classical higher order logic
|
||||||
|
\item the logic is a sweet spot between expressivity and automation
|
||||||
|
\pro very trustworthy thanks to LCF approach
|
||||||
|
\pro simple enough to understand easily
|
||||||
|
\pro very easy to write custom proof tools, \ie own automation
|
||||||
|
\pro reasonably fast and efficient
|
||||||
|
\item decent automation
|
||||||
|
\con no user-interface
|
||||||
|
\con no special proof language
|
||||||
|
\con no IDE, very little editor support
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
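% Illustration of the "easy to write custom proof tools" point: your own automation is
% just an SML value built from existing tacticals (a sketch; all names are standard HOL 4
% tacticals and simpsets available in a default hol session):
%
%   val MY_AUTO_TAC =
%     REPEAT STRIP_TAC THEN ASM_SIMP_TAC arith_ss [] THEN METIS_TAC [];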
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL Omega}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Omega}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item mainly developed by Peter Homeier \emph{\url{http://www.trustworthytools.com/}}
|
||||||
|
\item extension of HOL 4
|
||||||
|
\begin{itemize}
|
||||||
|
\pro logic extended by kinds
|
||||||
|
\pro allows type operator variables
|
||||||
|
\pro allows quantification over type variables
|
||||||
|
\end{itemize}
|
||||||
|
\pro sometimes handy, \eg to model category theory
|
||||||
|
\con not very actively developed
|
||||||
|
\con HOL 4 usually sufficient and better supported
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{HOL Light}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{HOL Light}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item mainly developed by John Harrison
|
||||||
|
\item \emph{\url{https://github.com/jrh13/hol-light}}
|
||||||
|
\item cleanup and reimplementation of HOL in OCaml
|
||||||
|
\item little legacy code
|
||||||
|
\item however, still very similar to HOL 4
|
||||||
|
\pro much better automation for real analysis
|
||||||
|
\pro cleaner
|
||||||
|
\con OCaml introduces some minor issues with trustworthiness
|
||||||
|
\con some other libs and tools of HOL 4 are missing
|
||||||
|
\con HOL 4 has a bigger community
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Isabelle}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Isabelle}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Isabelle is also a descendant of LCF
|
||||||
|
\item originally developed by Larry Paulson in Cambridge\\
|
||||||
|
\emph{\url{https://www.cl.cam.ac.uk/research/hvg/Isabelle/}}
|
||||||
|
\item meanwhile also developed at TU Munich by Tobias Nipkow\\
|
||||||
|
\emph{\url{http://www21.in.tum.de}}
|
||||||
|
\item huge contributions by Makarius Wenzel\\
|
||||||
|
\emph{\url{http://sketis.net}}
|
||||||
|
\item Isabelle is a generic theorem prover
|
||||||
|
\item most used instantiation is Isabelle/HOL
|
||||||
|
\item another important one is Isabelle/ZF
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Isabelle / HOL - Logic}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item logic of Isabelle / HOL very similar to HOL's logic
|
||||||
|
\begin{itemize}
|
||||||
|
\item meta logic leads to meta-level and object-level quantification
|
||||||
|
\pro type classes
|
||||||
|
\pro powerful module system
|
||||||
|
\pro existential variables
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\item Isabelle is implemented using the LCF approach
|
||||||
|
\item it uses SML (Poly/ML)
|
||||||
|
\item many original tools (\eg simplifier) similar to HOL
|
||||||
|
\item like HOL, focused on equational reasoning
|
||||||
|
\item many tools are exchanged between HOL 4 and Isabelle / HOL
|
||||||
|
\begin{itemize}
|
||||||
|
\item Metis
|
||||||
|
\item Sledgehammer
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Isabelle / HOL - Engineering}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\pro a lot of engineering went into Isabelle/HOL
|
||||||
|
\pro it has a very nice GUI
|
||||||
|
\begin{itemize}
|
||||||
|
\item IDE based on jEdit
|
||||||
|
\item special language for proofs (Isar)
|
||||||
|
\item good error messages
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\pro very good automation
|
||||||
|
\pro efficient implementations
|
||||||
|
\pro many libraries (Archive of Formal Proofs)
|
||||||
|
\pro excellent code extraction
|
||||||
|
\pro good documentation
|
||||||
|
\pro easy for new users
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Isabelle / HOL - Isar}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item special proof language Isar used
|
||||||
|
\item this allows writing \emph{declarative proofs}
|
||||||
|
\begin{itemize}
|
||||||
|
\item very high level
|
||||||
|
\item easy to read by humans
|
||||||
|
\item very robust
|
||||||
|
\item very good tool support
|
||||||
|
\item \ldots
|
||||||
|
\end{itemize}
|
||||||
|
\con however, tactical proofs are not easily accessible any more
|
||||||
|
\begin{itemize}
|
||||||
|
\item many intermediate goals need to be stated (declared) explicitly
|
||||||
|
\item this can be very tedious
|
||||||
|
\item tools like verification condition generators are hard to use
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Isabelle / HOL - Drawbacks}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\pro Isabelle/HOL provides excellent out-of-the-box automation
|
||||||
|
\pro it provides a very nice user interface
|
||||||
|
\pro it is very nice for new users
|
||||||
|
\con however, this comes at a price
|
||||||
|
\begin{itemize}
|
||||||
|
\item multiple layers added between kernel and user
|
||||||
|
\item hard to understand all these layers
|
||||||
|
\item a lot of knowledge is needed to write your own automation
|
||||||
|
\end{itemize}
|
||||||
|
\con hard to write your own automation
|
||||||
|
\con due to its focus on declarative proofs, Isabelle/HOL is not well suited for \eg PROSPER
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\section{Coq}
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Coq}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Coq is a proof assistant using the Calculus of Inductive Constructions
|
||||||
|
\item inspired by HOL 88
|
||||||
|
\item backward proofs are used as in HOL 4
|
||||||
|
\item however, very big differences
|
||||||
|
\begin{itemize}
|
||||||
|
\item much more powerful logic
|
||||||
|
\item dependent types
|
||||||
|
\item constructive logic
|
||||||
|
\item not exactly following LCF approach
|
||||||
|
\end{itemize}
|
||||||
|
\pro good user interface
|
||||||
|
\pro very good community support
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Coq - Logic}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\pro Coq's logic is very powerful
|
||||||
|
\pro it is very natural for mathematicians
|
||||||
|
\pro very natural for language theory
|
||||||
|
\pro allows reasoning about proofs
|
||||||
|
\item allows adding axioms as needed
|
||||||
|
\item as a result, Coq is often used to
|
||||||
|
\begin{itemize}
|
||||||
|
\item formalise mathematics
|
||||||
|
\item formalise programming language semantics
|
||||||
|
\item reason about proof theory
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Coq - Drawbacks}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Coq's power comes at a price
|
||||||
|
\con there is not much automation
|
||||||
|
\con proofs tend to be very long
|
||||||
|
\begin{itemize}
|
||||||
|
\item they are very simple though
|
||||||
|
\pro comparatively easy to maintain
|
||||||
|
\end{itemize}
|
||||||
|
\con Coq's proof checking can be very slow
|
||||||
|
\con when verifying programs or hardware you notice that HOL was designed for this purpose
|
||||||
|
\begin{itemize}
|
||||||
|
\item need for \emph{obvious} termination is tedious
|
||||||
|
\item missing automation
|
||||||
|
\item very slow
|
||||||
|
\end{itemize}
|
||||||
|
\end{itemize}
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Conclusion}
|
||||||
|
|
||||||
|
\begin{frame}
|
||||||
|
\frametitle{Summary}
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item there are many good theorem provers out there
|
||||||
|
\item \emph{pick the right tool for your purpose}
|
||||||
|
\item the HOL theorem prover is a good system for many purposes
|
||||||
|
\item for PROSPER it is a good choice
|
||||||
|
\item I encourage you to continue learning about HOL and interactive theorem proving in general
|
||||||
|
\item if you have any questions feel free to contact me (Thomas Tuerk, email \emph{thomas@tuerk-brechen.de})
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: "current"
|
||||||
|
%%% End:
|
65
lectures/Makefile
Normal file
65
lectures/Makefile
Normal file
@ -0,0 +1,65 @@
|
|||||||
|
.PHONY: full full-print current current-print all_parts all_parts_print all clean cleanAll all-slides all-print all_parts-all full-all default
|
||||||
|
|
||||||
|
default : all
|
||||||
|
|
||||||
|
PARTS = $(wildcard itp_parts_*.tex)
|
||||||
|
|
||||||
|
current: version.inc
|
||||||
|
@./mk_slides.sh current current
|
||||||
|
|
||||||
|
full: version.inc
|
||||||
|
@./mk_slides.sh full itp-course
|
||||||
|
|
||||||
|
full-print: version.inc
|
||||||
|
@./mk_handout.sh full itp-course-print
|
||||||
|
|
||||||
|
full-all: version.inc
|
||||||
|
@./mk_slides.sh full itp-course
|
||||||
|
@./mk_handout.sh full itp-course-print
|
||||||
|
|
||||||
|
hol: version.inc
|
||||||
|
@./mk_slides.sh hol hol-course
|
||||||
|
|
||||||
|
hol-print: version.inc
|
||||||
|
@./mk_handout.sh hol hol-course-print
|
||||||
|
|
||||||
|
hol-all: version.inc
|
||||||
|
@./mk_slides.sh hol hol-course
|
||||||
|
@./mk_handout.sh hol hol-course-print
|
||||||
|
|
||||||
|
|
||||||
|
current-print: version.inc
|
||||||
|
@./mk_handout.sh current current-print
|
||||||
|
|
||||||
|
itp_parts_%: version.inc
|
||||||
|
@./mk_slides.sh $@ $@
|
||||||
|
|
||||||
|
itp_parts_%-print: version.inc
|
||||||
|
@./mk_handout.sh $(patsubst %-print,%,$@) $@
|
||||||
|
|
||||||
|
itp_parts_%-all: version.inc
|
||||||
|
@./mk_slides.sh $(patsubst %-all,%,$@) $(patsubst %-all,%,$@)
|
||||||
|
@./mk_handout.sh $(patsubst %-all,%,$@) $(patsubst %-all,%-print,$@)
|
||||||
|
|
||||||
|
all_parts : $(PARTS:.tex=)
|
||||||
|
all_parts-print : $(PARTS:.tex=-print)
|
||||||
|
all_parts-all : $(PARTS:.tex=-all)
|
||||||
|
|
||||||
|
all-slides: version.inc full hol all_parts
|
||||||
|
all-print: version.inc hol-print full-print all_parts-print
|
||||||
|
|
||||||
|
all: full-all hol-all all_parts-all
|
||||||
|
|
||||||
|
clean:
|
||||||
|
rm -rf *.ps *.pdf *~ *.dvi *.aux *.log *.idx *.toc *.nav *.out *.snm *.flc *.vrb version.inc tmp
|
||||||
|
|
||||||
|
cleanAll: clean
|
||||||
|
rm -rf pdfs
|
||||||
|
|
||||||
|
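# version.inc records the hash, author and date of the current git commit so the slides
# can show which revision they were built from; it is regenerated whenever
# ../.git/logs/HEAD changes.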
version.inc: ../.git/logs/HEAD
|
||||||
|
@echo "%%% This file is generated by Makefile." > version.inc
|
||||||
|
@echo "%%% Do not edit this file!\n%%%" >> version.inc
|
||||||
|
@git log -1 --date=local --format="format:\
|
||||||
|
\\gdef\\GITAbrHash{%h}\
|
||||||
|
\\gdef\\GITAuthorDate{%ad}\
|
||||||
|
\\gdef\\GITAuthorName{%an}" >> version.inc
|
109
lectures/common.inc
Normal file
109
lectures/common.inc
Normal file
@ -0,0 +1,109 @@
|
|||||||
|
\documentclass{beamer}
|
||||||
|
|
||||||
|
\usepackage{pgf,pgfnodes,pgfarrows}
|
||||||
|
\usepackage{amssymb}
|
||||||
|
\usepackage{amsmath}
|
||||||
|
\usepackage{graphicx}
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{booktabs}
|
||||||
|
\usepackage{mathpartir}
|
||||||
|
\usepackage{latexsym}
|
||||||
|
\usepackage{textcomp}
|
||||||
|
|
||||||
|
\newcommand{\ie}{i.\,e.\ }
|
||||||
|
\newcommand{\eg}{e.\,g.\ }
|
||||||
|
\newcommand{\wrt}{w.\,r.\,t.\ }
|
||||||
|
\newcommand{\aka}{a.\,k.\,a.\ }
|
||||||
|
\newcommand{\cf}{cf.\ }
|
||||||
|
\newcommand{\etc}{etc.\ }
|
||||||
|
\newcommand{\cearrow}{\url{~}>}
|
||||||
|
\renewcommand{\emph}[1]{\structure{\textbf{#1}}}
|
||||||
|
\newcommand{\entails}{\vdash}
|
||||||
|
\newcommand{\hol}[1]{\emph{\texttt{#1}}}
|
||||||
|
\newcommand{\ml}[1]{\emph{\texttt{#1}}}
|
||||||
|
\newcommand{\textbsl}{\char`\\{}}
|
||||||
|
\newcommand{\holAnd}{/\textbsl{}}
|
||||||
|
\newcommand{\holOr}{\textbsl{}/}
|
||||||
|
\newcommand{\holLambda}{\textbsl{}}
|
||||||
|
\newcommand{\holImp}{==>}
|
||||||
|
\newcommand{\holEquiv}{<=>}
|
||||||
|
\newcommand{\holNeg}{\raisebox{0.5ex}{\texttildelow}}
|
||||||
|
\newcommand{\mlcomment}[1]{\structure{(* #1 *)}}
|
||||||
|
\newcommand{\aequiv}{\ensuremath{\stackrel{\alpha}{\equiv}}\ }
|
||||||
|
\newcommand\pro{\item[$+$]}
|
||||||
|
\newcommand\con{\item[$-$]}
|
||||||
|
|
||||||
|
\newcommand{\bottomstatement}[1]{
|
||||||
|
\begin{center}
|
||||||
|
\textbf{#1}
|
||||||
|
\end{center}
|
||||||
|
}
|
||||||
|
|
||||||
|
\input{version.inc}
|
||||||
|
|
||||||
|
\title{Interactive Theorem Proving (ITP) Course}
|
||||||
|
\institute{
|
||||||
|
\includegraphics[width=1.25cm]{images/cc/cc.eps}
|
||||||
|
\includegraphics[width=1.25cm]{images/cc/by.eps}
|
||||||
|
\includegraphics[width=1.25cm]{images/cc/sa.eps}\\
|
||||||
|
\scriptsize{Except where otherwise noted, this work is licensed under\\
|
||||||
|
\href{http://creativecommons.org/licenses/by-sa/4.0/}{Creative Commons Attribution-ShareAlike 4.0 International License}}}
|
||||||
|
\author{Thomas Tuerk (thomas@tuerk-brechen.de)}
|
||||||
|
\date{Academic Year 2016/17, Period 4}
|
||||||
|
\newcommand{\partstitle}[1]{\title{Interactive Theorem Proving (ITP) Course\\#1}}
|
||||||
|
\newcommand{\titleframe}{\frame[plain]{\titlepage\hfill\tiny version \GITAbrHash{} of \GITAuthorDate{}}}
|
||||||
|
\newcommand{\partstitleframe}[1]{\partstitle{#1}\titleframe}
|
||||||
|
|
||||||
|
|
||||||
|
\usetheme{Boadilla}
|
||||||
|
\setbeamertemplate{footline}[frame number]{}
|
||||||
|
|
||||||
|
|
||||||
|
\logo{\pgfputat{\pgfxy(-.5,8.7)}{\pgfbox[center,top]{\includegraphics[width=8mm]{images/cc/by-sa.eps}}}}
|
||||||
|
|
||||||
|
\setbeamertemplate{part page}
|
||||||
|
{
|
||||||
|
\begin{centering}
|
||||||
|
{\usebeamerfont{part name}\usebeamercolor[fg]{part name}\partname~\insertromanpartnumber}
|
||||||
|
\vskip1em\par
|
||||||
|
\begin{beamercolorbox}[sep=16pt,center]{part title}
|
||||||
|
\usebeamerfont{part title}\insertpart\par
|
||||||
|
\end{beamercolorbox}
|
||||||
|
\vfill
|
||||||
|
\begin{center}
|
||||||
|
\includegraphics[width=0.75cm]{images/cc/cc.eps}
|
||||||
|
\includegraphics[width=0.75cm]{images/cc/by.eps}
|
||||||
|
\includegraphics[width=0.75cm]{images/cc/sa.eps}\\
|
||||||
|
\tiny{Except where otherwise noted, this work is licensed under\\
|
||||||
|
\href{http://creativecommons.org/licenses/by-sa/4.0/}{Creative Commons Attribution-ShareAlike 4.0 International License}}.
|
||||||
|
\end{center}
|
||||||
|
\end{centering}
|
||||||
|
}
|
||||||
|
|
||||||
|
\makeatletter
|
||||||
|
\AtBeginPart{%
|
||||||
|
\addtocontents{toc}{\protect\beamer@partintoc{\the\c@part}{\beamer@partnameshort}{\the\c@page}{\the\numexpr\value{framenumber}+1\relax}}%
|
||||||
|
}
|
||||||
|
%% number, shortname, page.
|
||||||
|
\providecommand\beamer@partintoc[4]{%
|
||||||
|
\ifnum\c@tocdepth=-1\relax
|
||||||
|
% requesting onlyparts.
|
||||||
|
\qquad\makebox[5em][l]{\hyperlink{page.#3}{\emph{Part {\uppercase\expandafter{\romannumeral #1\relax}}}}} \makebox[18em][l]{\hyperlink{page.#3}{#2}}
|
||||||
|
\makebox[3em][r]{\hyperlink{page.#3}{#4}}
|
||||||
|
|
||||||
|
\par
|
||||||
|
\fi
|
||||||
|
}
|
||||||
|
\define@key{beamertoc}{onlyparts}[]{%
|
||||||
|
\c@tocdepth=-1\relax
|
||||||
|
}
|
||||||
|
\makeatother
|
||||||
|
|
||||||
|
%\usefonttheme[onlylarge]{structurebold}
|
||||||
|
%\usepackage{times}
|
||||||
|
|
||||||
|
\ifdefined\ttbwflag
|
||||||
|
\usecolortheme{seagull}
|
||||||
|
\beamertemplatenavigationsymbolsempty
|
||||||
|
\fi
|
32
lectures/current.tex
Normal file
32
lectures/current.tex
Normal file
@ -0,0 +1,32 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{2}
|
||||||
|
\setcounter{framenumber}{291}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Parts III}
|
||||||
|
|
||||||
|
%\input{00_webpage_intro.tex}
|
||||||
|
%\input{01_introduction.tex}
|
||||||
|
%\input{02_organisational_matters.tex}
|
||||||
|
\input{03_hol_overview.tex}
|
||||||
|
%\input{04_hol_logic.tex}
|
||||||
|
%\input{05_usage.tex}
|
||||||
|
%\input{06_forward_proofs.tex}
|
||||||
|
%\input{07_backward_proofs.tex}
|
||||||
|
%\input{08_basic_tactics.tex}
|
||||||
|
%\input{09_induction.tex}
|
||||||
|
%\input{10_definitions.tex}
|
||||||
|
%\input{11_good_definitions.tex}
|
||||||
|
%\input{12_deep_shallow.tex}
|
||||||
|
%\input{13_rewriting.tex}
|
||||||
|
%\input{14_advanced_definitions.tex}
|
||||||
|
%\input{15_maintainable_proofs.tex}
|
||||||
|
%\input{16_hol_overview.tex}
|
||||||
|
%\input{17_other_provers.tex}
|
||||||
|
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
37
lectures/full.tex
Normal file
37
lectures/full.tex
Normal file
@ -0,0 +1,37 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\titleframe
|
||||||
|
\begin{frame}{Contents}
|
||||||
|
\footnotesize
|
||||||
|
\tableofcontents[onlyparts]
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\input{01_introduction.tex}
|
||||||
|
\input{02_organisational_matters.tex}
|
||||||
|
\input{03_hol_overview.tex}
|
||||||
|
\input{04_hol_logic.tex}
|
||||||
|
\input{05_usage.tex}
|
||||||
|
\input{06_forward_proofs.tex}
|
||||||
|
\input{07_backward_proofs.tex}
|
||||||
|
\input{08_basic_tactics.tex}
|
||||||
|
\input{09_induction.tex}
|
||||||
|
\input{10_definitions.tex}
|
||||||
|
\input{11_good_definitions.tex}
|
||||||
|
\input{12_deep_shallow.tex}
|
||||||
|
\input{13_rewriting.tex}
|
||||||
|
\input{14_advanced_definitions.tex}
|
||||||
|
\input{15_maintainable_proofs.tex}
|
||||||
|
\input{16_hol_overview.tex}
|
||||||
|
\input{17_other_provers.tex}
|
||||||
|
|
||||||
|
% code extraction / cake ML
|
||||||
|
% conformance testing
|
||||||
|
% maintainable proofs
|
||||||
|
% overview over main tools / libs
|
||||||
|
% wordLib
|
||||||
|
% decision procedures
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
34
lectures/hol.tex
Normal file
34
lectures/hol.tex
Normal file
@ -0,0 +1,34 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
\date{}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\author{Thomas Tuerk (tuerk@thomas-tuerk.de)}
|
||||||
|
\partstitleframe{Web Version}
|
||||||
|
|
||||||
|
\begin{frame}{Contents}
|
||||||
|
\footnotesize
|
||||||
|
\tableofcontents[onlyparts]
|
||||||
|
\end{frame}
|
||||||
|
|
||||||
|
\input{00_webpage_intro.tex}
|
||||||
|
\input{01_introduction.tex}
|
||||||
|
\input{03_hol_overview.tex}
|
||||||
|
\input{04_hol_logic.tex}
|
||||||
|
\input{05_usage.tex}
|
||||||
|
\input{06_forward_proofs.tex}
|
||||||
|
\input{07_backward_proofs.tex}
|
||||||
|
\input{08_basic_tactics.tex}
|
||||||
|
\input{09_induction.tex}
|
||||||
|
\input{10_definitions.tex}
|
||||||
|
\input{11_good_definitions.tex}
|
||||||
|
\input{12_deep_shallow.tex}
|
||||||
|
\input{13_rewriting.tex}
|
||||||
|
\input{14_advanced_definitions.tex}
|
||||||
|
\input{15_maintainable_proofs.tex}
|
||||||
|
\input{16_hol_overview.tex}
|
||||||
|
\input{17_other_provers.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
8
lectures/images/Makefile
Executable file
8
lectures/images/Makefile
Executable file
@ -0,0 +1,8 @@
|
|||||||
|
all:
|
||||||
|
latex hol-family.tex
|
||||||
|
dvips hol-family.dvi
|
||||||
|
ps2eps hol-family.ps
|
||||||
|
|
||||||
|
clean:
|
||||||
|
rm -f *.dvi *.toc *.aux *.ps *.log *.lof *.bbl *.blg *.hix *.tid *.tde *.out *~
|
||||||
|
|
3
lectures/images/cc/LICENSE
Normal file
3
lectures/images/cc/LICENSE
Normal file
@ -0,0 +1,3 @@
|
|||||||
|
The images in this directory are trademarks of creative-commons,
|
||||||
|
which are covered by the Creative Commons Trademark Policy
|
||||||
|
(https://creativecommons.org/policies).
|
2727
lectures/images/cc/by-sa.eps
Normal file
2727
lectures/images/cc/by-sa.eps
Normal file
File diff suppressed because one or more lines are too long
5902
lectures/images/cc/by.eps
Normal file
5902
lectures/images/cc/by.eps
Normal file
File diff suppressed because one or more lines are too long
5902
lectures/images/cc/cc.eps
Normal file
5902
lectures/images/cc/cc.eps
Normal file
File diff suppressed because one or more lines are too long
5902
lectures/images/cc/sa.eps
Normal file
5902
lectures/images/cc/sa.eps
Normal file
File diff suppressed because one or more lines are too long
3452
lectures/images/hol-family.eps
Normal file
3452
lectures/images/hol-family.eps
Normal file
File diff suppressed because it is too large
38
lectures/images/hol-family.tex
Normal file
38
lectures/images/hol-family.tex
Normal file
@ -0,0 +1,38 @@
|
|||||||
|
\documentclass{minimal}
|
||||||
|
\usepackage{pstricks}
|
||||||
|
\pagestyle{empty}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\begin{pspicture}(0,6)(8,22)
|
||||||
|
\usefont{T1}{ppl}{m}{n}
|
||||||
|
\rput(3,20){Edinburgh LCF}
|
||||||
|
\rput(3,18){Cambridge LCF}
|
||||||
|
\rput(3,16){HOL88}
|
||||||
|
\rput(1,14){hol90}
|
||||||
|
\rput(5,13.5){ProofPower}
|
||||||
|
\rput(5,14.5){Isabelle/HOL}
|
||||||
|
\rput(3,12){HOL Light}
|
||||||
|
\rput(1,10){hol98}
|
||||||
|
\rput(5,10){HOL Zero}
|
||||||
|
\rput(1,8){HOL4}
|
||||||
|
|
||||||
|
\psline{->}(3,19.7)(3,18.3)
|
||||||
|
\psline{->}(3,17.7)(3,16.3)
|
||||||
|
\psline{->}(1,13.7)(1,10.3)
|
||||||
|
\psline{->}(1,9.7)(1,8.3)
|
||||||
|
|
||||||
|
|
||||||
|
\psline{->}(3,11.7)(1,10.3)
|
||||||
|
\psline{->}(3,11.7)(5,10.3)
|
||||||
|
|
||||||
|
\psline{->}(5,13.2)(5,10.3)
|
||||||
|
|
||||||
|
\psline{->}(3,15.7)(3,12.3)
|
||||||
|
\psline{->}(3,15.7)(1,14.3)
|
||||||
|
\psline{->}(3,15.7)(3.8,14.5)
|
||||||
|
\psline{->}(3,15.7)(3.8,13.5)
|
||||||
|
|
||||||
|
\psline{->}(1,13.7)(3,12.3)
|
||||||
|
\end{pspicture}
|
||||||
|
\end{document}
|
||||||
|
|
14
lectures/itp_parts_01-04.tex
Normal file
14
lectures/itp_parts_01-04.tex
Normal file
@ -0,0 +1,14 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
\setcounter{framenumber}{1}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\partstitleframe{Parts I - IV}
|
||||||
|
|
||||||
|
\input{01_introduction.tex}
|
||||||
|
\input{02_organisational_matters.tex}
|
||||||
|
\input{03_hol_overview.tex}
|
||||||
|
\input{04_hol_logic.tex}
|
||||||
|
\end{document}
|
||||||
|
|
16
lectures/itp_parts_05-06.tex
Normal file
16
lectures/itp_parts_05-06.tex
Normal file
@ -0,0 +1,16 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{4}
|
||||||
|
\setcounter{framenumber}{42}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Parts V, VI}
|
||||||
|
|
||||||
|
|
||||||
|
\input{05_usage.tex}
|
||||||
|
\input{06_forward_proofs.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
16
lectures/itp_parts_07-09.tex
Normal file
16
lectures/itp_parts_07-09.tex
Normal file
@ -0,0 +1,16 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{6}
|
||||||
|
\setcounter{framenumber}{65}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Parts VII - IX}
|
||||||
|
|
||||||
|
\input{07_backward_proofs.tex}
|
||||||
|
\input{08_basic_tactics.tex}
|
||||||
|
\input{09_induction.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
16
lectures/itp_parts_10-12.tex
Normal file
16
lectures/itp_parts_10-12.tex
Normal file
@ -0,0 +1,16 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{9}
|
||||||
|
\setcounter{framenumber}{123}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Parts X - XII}
|
||||||
|
|
||||||
|
\input{10_definitions.tex}
|
||||||
|
\input{11_good_definitions.tex}
|
||||||
|
\input{12_deep_shallow.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
18
lectures/itp_parts_13.tex
Normal file
18
lectures/itp_parts_13.tex
Normal file
@ -0,0 +1,18 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{12}
|
||||||
|
\setcounter{framenumber}{196}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Part XIII}
|
||||||
|
|
||||||
|
%\input{10_definitions.tex}
|
||||||
|
%\input{11_good_definitions.tex}
|
||||||
|
%\input{12_deep_shallow.tex}
|
||||||
|
\input{13_rewriting.tex}
|
||||||
|
%\input{14_advanced_definitions.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
14
lectures/itp_parts_14.tex
Normal file
14
lectures/itp_parts_14.tex
Normal file
@ -0,0 +1,14 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{13}
|
||||||
|
\setcounter{framenumber}{250}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Part XIV}
|
||||||
|
|
||||||
|
\input{14_advanced_definitions.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
14
lectures/itp_parts_15.tex
Normal file
14
lectures/itp_parts_15.tex
Normal file
@ -0,0 +1,14 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{14}
|
||||||
|
\setcounter{framenumber}{271}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Part XV}
|
||||||
|
|
||||||
|
\input{15_maintainable_proofs.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
15
lectures/itp_parts_16-17.tex
Normal file
15
lectures/itp_parts_16-17.tex
Normal file
@ -0,0 +1,15 @@
|
|||||||
|
\input{common.inc}
|
||||||
|
|
||||||
|
%\setbeamertemplate{footline}{}
|
||||||
|
|
||||||
|
\setcounter{part}{15}
|
||||||
|
\setcounter{framenumber}{293}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
\partstitleframe{Part XVI, XVII}
|
||||||
|
|
||||||
|
\input{16_hol_overview.tex}
|
||||||
|
\input{17_other_provers.tex}
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
10
lectures/mk_handout.sh
Executable file
10
lectures/mk_handout.sh
Executable file
@ -0,0 +1,10 @@
|
|||||||
|
#!/bin/sh
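# usage: ./mk_handout.sh <tex-basename> <output-name>
# Compiles <tex-basename>.tex twice in beamer handout mode (with \ttbwflag set for the
# print layout) and uses pdfjam to produce a 4-slides-per-page A4 handout pdfs/<output-name>.pdf.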
|
||||||
|
echo "creating $2.pdf"
|
||||||
|
mkdir -p tmp
|
||||||
|
pdflatex -interaction=batchmode -output-directory=tmp "\PassOptionsToClass{handout}{beamer}\def\ttbwflag{}\input{$1.tex}" > /dev/null
|
||||||
|
pdflatex -interaction=batchmode -output-directory=tmp "\PassOptionsToClass{handout}{beamer}\def\ttbwflag{}\input{$1.tex}" > /dev/null
|
||||||
|
cd tmp
|
||||||
|
pdfjam --landscape --a4paper --nup 2x2 --scale 0.9 $1.pdf -o $1-4.pdf -q
|
||||||
|
cd ..
|
||||||
|
mkdir -p pdfs
|
||||||
|
mv tmp/$1-4.pdf pdfs/$2.pdf
|
8
lectures/mk_slides.sh
Executable file
8
lectures/mk_slides.sh
Executable file
@ -0,0 +1,8 @@
|
|||||||
|
#!/bin/sh
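# usage: ./mk_slides.sh <tex-basename> <output-name>
# Compiles <tex-basename>.tex twice with pdflatex and moves the result to pdfs/<output-name>.pdf.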
|
||||||
|
echo "creating $2.pdf"
|
||||||
|
mkdir -p tmp
|
||||||
|
pdflatex -interaction=batchmode -output-directory=tmp $1.tex > /dev/null
|
||||||
|
pdflatex -interaction=batchmode -output-directory=tmp $1.tex > /dev/null
|
||||||
|
mkdir -p pdfs
|
||||||
|
mv tmp/$1.pdf pdfs/$2.pdf
|
||||||
|
|
9
questionnaire/Makefile
Executable file
9
questionnaire/Makefile
Executable file
@ -0,0 +1,9 @@
|
|||||||
|
all:
|
||||||
|
pdflatex questionnaire.tex
|
||||||
|
pdflatex questionnaire.tex
|
||||||
|
pdflatex questionnaire-simple.tex
|
||||||
|
pdflatex questionnaire-simple.tex
|
||||||
|
|
||||||
|
clean:
|
||||||
|
rm -f *.toc *.aux *.ps *.log *.lof *.bbl *.blg *.hix *.tid *.tde *.out *~
|
||||||
|
|
145
questionnaire/questionnaire-simple.tex
Normal file
145
questionnaire/questionnaire-simple.tex
Normal file
@ -0,0 +1,145 @@
|
|||||||
|
\documentclass[a4paper,10pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
\usepackage{amsfonts}
|
||||||
|
|
||||||
|
\title{Background Questionnaire}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part} Background Questionnaire
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
|
||||||
|
Please try to answer the following questions alone and without using
|
||||||
|
any external aids. If you have trouble, just skip a question instead
|
||||||
|
of guessing and thinking very hard. Try not to write lengthy answers;
|
||||||
|
often bullet points are enough. This is not an exam; please hand in the
|
||||||
|
results \emph{anonymously}. The results are used only to adapt the
|
||||||
|
\emph{Interactive Theorem Proving} course to the background and
|
||||||
|
interests of the audience.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Functional Programming}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Consider the following functional program on lists.
|
||||||
|
\begin{verbatim}
|
||||||
|
fun SNOC x [] = [x]
|
||||||
|
| SNOC x (y::ys) = y::(SNOC x ys)
|
||||||
|
\end{verbatim}
|
||||||
|
Please answer the following questions:
|
||||||
|
\begin{itemize}
|
||||||
|
\item What is the result of \texttt{SNOC 5 [7,3,2]}?
|
||||||
|
\item Describe informally (and very briefly) what the function \texttt{SNOC} does.
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item Write a functional program \texttt{APPEND} that appends two lists.
|
||||||
|
Try to use SML notation.
|
||||||
|
\texttt{APPEND} should satisfy the following example behaviours
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{APPEND [1,2,3] [4,5] = [1,2,3,4,5]}
|
||||||
|
\item \texttt{APPEND [] [1] = [1]}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
|
||||||
|
\item What is \emph{tail-recursion} and why is it important for functional programming?
|
||||||
|
|
||||||
|
\item Write a tail-recursive version of \texttt{APPEND}.
|
||||||
|
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Induction Proofs}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Prove that the following method to calculate the sum of the first $n$ natural numbers is correct (notice $0 \notin \mathbb{N}$, i.\,e.\ $\mathbb{N} = \{1, 2, 3, \ldots\}$):
|
||||||
|
\[\forall n \in \mathbb{N}.\ \sum_{1 \leq i \leq n} i = \frac{n * (n+1)}{2}\]
|
||||||
|
\item Prove that \texttt{$\forall$x l.\ LENGTH (SNOC x l) = LENGTH l + 1} holds for the function \texttt{SNOC} given above. You can use arithmetic facts and the
|
||||||
|
following properties of \texttt{LENGTH}:
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{LENGTH [] = 0}
|
||||||
|
\item \texttt{$\forall$x xs.\ LENGTH (x ::\ xs) = LENGTH xs + 1}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item Prove that \texttt{APPEND} is associative (hint: via induction). You can use the following
|
||||||
|
properties of \texttt{APPEND}. Use the notation \texttt{l1 ++ l2} for \texttt{APPEND l1 l2}.
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{$\forall$l.\ [] ++ l = l}
|
||||||
|
\item \texttt{$\forall$l x xs.\ (x ::\ xs) ++ l = x ::\ (xs ++ l)}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item Which induction principles do you know? Which ones did you use above?
|
||||||
|
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Logic}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Explain briefly what's wrong with the following reasoning:
|
||||||
|
``No cat has two tails. A cat has one more tail than no cat. Therefore, a cat has three tails.''
|
||||||
|
|
||||||
|
\item It is well known that in ancient times (a) all \emph{Spartans} were \emph{brave} and (b) all \emph{Athenians} were \emph{wise}.
|
||||||
|
Spartans and Athenians always fought with each other. So there was (c) no dual citizenship.
|
||||||
|
Once upon a time, 3 Greek philosophers met: Diogenes, Platon and Euklid. In contrast to their
|
||||||
|
famous namesakes, not much is known about them. We know however, that
|
||||||
|
(d) they all came from Sparta or Athens.
|
||||||
|
During their meeting, the 3 philosophers started to argue and finally insulted each other.
|
||||||
|
Being philosophers they were very careful not to tell a lie, though.
|
||||||
|
A few fragments of what they said have come to us through the centuries:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item[(e)] Euklid: ``If Platon is from Sparta, then Diogenes is a coward.''
|
||||||
|
\item[(f)] Platon: ``Diogenes is a coward, provided Euklid is from Sparta.''
|
||||||
|
\item[(g)] Platon: ``If Diogenes is from Athens, then Euklid is a coward.''
|
||||||
|
\item[(h)] Diogenes: ``If Platon is from Athens, then Euklid is a moron.''
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
Can you reconstruct from which town each of these philosophers came?
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Formalise the relevant parts of the text above in first order logic. Model \emph{is coward} as \emph{is not brave} and \emph{is moron} as \emph{is not wise}.
|
||||||
|
\item Using the proof method of \emph{resolution} show that Platon is from Sparta. If you
|
||||||
|
don't know the resolution method, try to show it using some other method.
|
||||||
|
\item Which town did Euklid come from?
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
\item Let the function \textit{myst} for all \textit{R} of type $\alpha \to \alpha \to \textit{bool}$ be given by
|
||||||
|
|
||||||
|
\[
|
||||||
|
\textit{myst}(\textit{R}) = \lambda a\, b.\ \forall Q. \left(
|
||||||
|
\begin{array}{cr}
|
||||||
|
\forall x.\ Q\, x\, x\ & \wedge \\
|
||||||
|
\forall x\,y.\ R\,x\,y\ \Longrightarrow\ Q\, x\, y\ & \wedge \\
|
||||||
|
\forall x\,y\,z.\ (Q\,x\,y\ \wedge\ Q\,y\,z)\ \Longrightarrow\ Q\,x\,z
|
||||||
|
\end{array}
|
||||||
|
\right) \Longrightarrow Q\,a\,b
|
||||||
|
\]
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Which type does \textit{myst} have?
|
||||||
|
\item What concept does the type $\alpha \to \alpha \to \textit{bool}$ represent?
|
||||||
|
\item Translate the formula into English using as high-level concepts as possible.
|
||||||
|
\item What concept does the function \textit{myst} define?
|
||||||
|
\end{enumerate}
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
\section{General}
|
||||||
|
|
||||||
|
Do you have any comments or suggestions? Is there something that you
|
||||||
|
believe is relevant for selecting and prioritising topics of the
|
||||||
|
Interactive Theorem Proving course? It is fine if you don't write
|
||||||
|
anything here.
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|
121
questionnaire/questionnaire.tex
Normal file
121
questionnaire/questionnaire.tex
Normal file
@ -0,0 +1,121 @@
|
|||||||
|
\documentclass[a4paper,11pt,oneside]{scrartcl}
|
||||||
|
|
||||||
|
\usepackage[utf8]{inputenc}
|
||||||
|
\usepackage[a4paper]{geometry}
|
||||||
|
\usepackage{hyperref}
|
||||||
|
\usepackage{url}
|
||||||
|
\usepackage{color}
|
||||||
|
|
||||||
|
\title{Background Questionnaire}
|
||||||
|
|
||||||
|
\begin{document}
|
||||||
|
|
||||||
|
\begin{center}
|
||||||
|
\usekomafont{sectioning}\usekomafont{part} Background Questionnaire
|
||||||
|
\end{center}
|
||||||
|
\bigskip
|
||||||
|
|
||||||
|
|
||||||
|
Please try to answer the following questions alone and without using any external aids and hand in the results \emph{anonymously}.
|
||||||
|
This is not an exam.
|
||||||
|
Instead, the results are used to adapt the \emph{Interactive Theorem Proving} course to the background and interests of the audience.
|
||||||
|
|
||||||
|
|
||||||
|
\section{Functional Programming}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Write a functional program \texttt{APPEND} that appends two lists.
|
||||||
|
Try to use SML notation.
|
||||||
|
\texttt{APPEND} should satisfy the following example behaviours
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{APPEND [1,2,3] [4,5] = [1,2,3,4,5]}
|
||||||
|
\item \texttt{APPEND [] [1] = [1]}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item What does \emph{tail-recursion} mean and why is it important for functional programming?
|
||||||
|
|
||||||
|
\item Write a tail-recursive version of \texttt{APPEND}.
|
||||||
|
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Induction Proofs}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Prove that \texttt{APPEND} is associative (hint: via induction). You can use the following
|
||||||
|
properties of \texttt{APPEND} and \texttt{CONS}. For convenience use the notation \texttt{l1 ++ l2} for \texttt{APPEND l1 l2} and the notation \texttt{x::xs} for \texttt{CONS x xs}. Be very detailed and formal.
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \texttt{$\forall$ l.\ [] ++ l = l}
|
||||||
|
\item \texttt{$\forall$ l x xs.\ (x ::\ xs) ++ l = x ::\ (xs ++ l)}
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\item Which induction principles do you know? Which one did you use for proving the associativity of \texttt{APPEND}?
|
||||||
|
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
|
||||||
|
\section{Logic}
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item Explain briefly what's wrong with the following reasoning:
|
||||||
|
``No cat has two tails. A cat has one more tail than no cat. Therefore, a cat has three tails.''
|
||||||
|
|
||||||
|
\item What's wrong with the following classical example of a false chain syllogism?
|
||||||
|
``Qui bene bibit, bene dormit; qui bene dormit, non peccat; qui non peccat, salvatur; ergo qui bene bibit, salvatur. (Ergo, bibamus!)'' (``Who drinks well, sleeps well. Who sleeps well, does not sin. Who does not sin, will go to heaven. Therefore, who drinks well, will go to heaven. (So, let's drink!)'')
|
||||||
|
|
||||||
|
|
||||||
|
\item Formalise the following riddle\footnote{found in the German Wikipedia entry for resolution} using first order logic.
|
||||||
|
|
||||||
|
It is well known that in ancient times (a) all \emph{Spartans} were \emph{brave} and (b) all \emph{Athenians} were \emph{wise}.
|
||||||
|
Spartans and Athenians always fought with each other. So there was (c) no dual citizenship.
|
||||||
|
Once upon a time, 3 Greek philosophers met: Diogenes, Platon and Euklid. In contrast to their
|
||||||
|
famous namesakes, not much is known about them. We know however, that
|
||||||
|
(d) they all came from Sparta or Athens and did not like each other much.
|
||||||
|
Being philosophers, however, they never told a lie, even while insulting each other.
|
||||||
|
A few fragments of their squabbles have come to us through the centuries:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item[(e)] Euklid: ``If Platon is from Sparta, then Diogenes is a coward.''
|
||||||
|
\item[(f)] Platon: ``Diogenes is a coward, provided Euklid is from Sparta.''
|
||||||
|
\item[(g)] Platon: ``If Diogenes is from Athens, then Euklid is a coward.''
|
||||||
|
\item[(h)] Diogenes: ``If Platon is from Athens, then Euklid is a moron.''
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
Can you reconstruct where each of them came from?
|
||||||
|
Prove that Platon is from Sparta using \emph{resolution}.
|
||||||
|
|
||||||
|
\item Explain very briefly what \emph{Skolemisation} is.
|
||||||
|
\item Let the function \textit{myst} for all \textit{R} of type $\alpha \to \alpha \to \textit{bool}$ be given by
|
||||||
|
|
||||||
|
\[
|
||||||
|
\textit{myst}(\textit{R}) = \lambda a\, b.\ \forall Q. \left(
|
||||||
|
\begin{array}{cr}
|
||||||
|
\forall x.\ Q\, x\, x\ & \wedge \\
|
||||||
|
\forall x\,y.\ R\,x\,y\ \Longrightarrow\ Q\, x\, y\ & \wedge \\
|
||||||
|
\forall x\,y\,z.\ (Q\,x\,y\ \wedge\ Q\,y\,z)\ \Longrightarrow\ Q\,x\,z
|
||||||
|
\end{array}
|
||||||
|
\right) \Longrightarrow Q\,a\,b
|
||||||
|
\]
|
||||||
|
Which concept does the function \textit{myst} define? If you don't see it, describe as much
|
||||||
|
as possible. Which type does it have? What is represented by this type? Explain the formula in English using as high-level concepts as possible.
|
||||||
|
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
|
||||||
|
\section{General}
|
||||||
|
|
||||||
|
Please write down anything else that you believe is relevant for
|
||||||
|
selecting and prioritising topics of the Interactive Theorem Proving
|
||||||
|
course. Do you have any comments or suggestions? Why are you
|
||||||
|
attending the course? Do you have a concrete application in mind?
|
||||||
|
Have you used interactive theorem provers before? Which ones? Do you
|
||||||
|
have experience with other formal method tools like model checkers,
|
||||||
|
SAT solvers, SMT solvers, first order provers \ldots? Which ones?
|
||||||
|
|
||||||
|
\end{document}
|
||||||
|
|
||||||
|
%%% Local Variables:
|
||||||
|
%%% mode: latex
|
||||||
|
%%% TeX-master: t
|
||||||
|
%%% End:
|