Design Considerations

This page is in progress. (last updated 28 Jan 2021)

Table of Contents

  • Syntactic considerations
      1. Block
      2. End-of-Line Semicolon Is Optional
      3. Comment Signaling Character
  • Semantic considerations
      1. Int size is always 4 bytes; Long size is always 8 bytes

Syntactic considerations

1. Block

There are two popular block styles in programming languages: the C-style block, which wraps the block in "{" and "}", and the Python-style block, which uses meaningful whitespace (indentation).

Although both styles have pros and cons, we decided to use the Python-style block in Minuet for the following reasons:

1) Vertical space saving: a Python-style block needs no end-of-block marker, which saves a line. The sketches below illustrate this with a simple if statement; the helper names are hypothetical, the two C variants assume the usual brace placements, and the Python-style variant uses provisional Minuet-like syntax.

C-style block (1): opening brace on the same line
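    if (x > 0) {            /* 4 lines in total: the closing "}" costs a line */
        do_something();
        do_more();
    }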


C-style block (2): opening brace on its own line
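    if (x > 0)              /* 5 lines in total: both braces cost a line */
    {
        do_something();
        do_more();
    }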


Python-style block: indentation only
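    if x > 0:               # 3 lines in total: no end-of-block marker needed
        do_something()
        do_more()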


2) No misleading indentation: in C, indentation and braces can disagree, as the three sketches below show (again with hypothetical helper names).

C Misleading Indentation (1): block without indent
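    if (x > 0) {
    do_something();         /* inside the block, despite the missing indent */
    do_more();
    }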

C Misleading Indentation (2): indent without block (the second statement is indented but not part of the if)
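    if (x > 0)
        do_something();
        do_more();          /* indented like the line above, but runs unconditionally */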

C Misleading Indentation (3): indent without block (the semicolon after the if consumes the one-line block)
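    if (x > 0);             /* the ";" is an empty statement and becomes the whole if-body */
        do_something();     /* indented, but always executed */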

The cost of using Python-style indentation is that the programmer must pay attention to every indent. We think this cost is justified by the benefits gained.

2. End-of-Line Semicolon Is Optional

A newline is a strong enough signal to the programmer that the statement has ended.

The cost is that the language needs rules for multi-line statements.
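For example, one common rule (Python's; whether Minuet adopts it is not settled) lets an unclosed bracket carry a statement onto the next line. A sketch with hypothetical names:

    total = (first_value
             + second_value
             + third_value)     # the unclosed "(" continues the statement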

3. Comment Signaling Character

Use "#" as a comment character. The benefits are:

1) Cleaner look: "#" is visually distinctive, while "//" resembles the "/" operator.

2) Typing "#" is easier than typing "//".

The cost is that C, Java, and Swift programmers must change their habits. (This decision is not finalized; the designer may yet revert to "//".)
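A sketch of the intended look, in provisional Minuet-like syntax with hypothetical names:

    # compute the total price
    total = price * quantity    # trailing comments work too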

Semantic considerations

1. Int size is always 4 bytes; Long size is always 8 bytes

Some languages do not define the sizes of these types; the compiler decides, depending on compile options and the CPU model.

This hurts code reusability: most existing code assumes minimum sizes for its int or long data in order to work correctly, so reuse succeeds only when the compile options match those assumptions.

Minuet eliminates these problems by specifying the sizes of most built-in primitive datatypes.

  • Byte, UByte : 1 byte

  • Short, UShort : 2 bytes

  • Int, UInt : 4 bytes

  • Long, ULong : 8 bytes

(The names LongLong/ULongLong or LLong/ULLong may be used for the data types of future 16-byte CPUs.)
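In C terms, these guarantees correspond to the exact-width types of <stdint.h>. A minimal sketch of the assumed mapping (this correspondence is our illustration, not Minuet's definition):

    #include <assert.h>
    #include <stdint.h>

    /* Assumed correspondence between Minuet's types and C's exact-width types: */
    typedef int8_t  Byte;   typedef uint8_t  UByte;    /* 1 byte  */
    typedef int16_t Short;  typedef uint16_t UShort;   /* 2 bytes */
    typedef int32_t Int;    typedef uint32_t UInt;     /* 4 bytes */
    typedef int64_t Long;   typedef uint64_t ULong;    /* 8 bytes */

    static_assert(sizeof(Int) == 4, "Int is always 4 bytes");
    static_assert(sizeof(Long) == 8, "Long is always 8 bytes");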

Some may argue that Int should have the CPU's native data size (e.g., 64 bits, i.e. 8 bytes, on a 64-bit CPU), with the names Int8, Int16, Int32, and Int64 used for fixed-size integers. This causes two problems:

1) When picking up code for reuse, we cannot be sure it works with both a 32-bit and a 64-bit Int; code written with a 64-bit Int in mind may fail when Int is 32 bits (see the sketch after this list).

2) The names Int32 and Int64 are harder on human cognition than Int and Long.
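A hedged C sketch of problem 1: this code is correct when int has 64 bits but overflows when int has 32 bits.

    #include <stdio.h>

    int main(void) {
        int seconds_per_year = 365 * 24 * 60 * 60;    /* 31,536,000: fits in 32 bits */
        int ms_per_year = seconds_per_year * 1000;    /* needs ~35 bits: overflows a 32-bit int */
        printf("%d\n", ms_per_year);
        return 0;
    }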

If an algorithm needs an integer type that can grow virtually without limit, that is what the BigInt datatype is for.

If an algorithm needs an integer type whose size is native to the CPU, a NativeInt type can be defined. But I think this case is rare.