Operations
People were concerned with preserving the authenticity of their ideas, and with proving when those ideas came into existence, long before the web and insecure browsers could peep into them.
Proof-posting works with most well-known blockchains, and you can even specify a custom blockchain of your own.
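The core of proof-posting can be sketched generically: hash the idea text and publish the digest somewhere timestamped, such as a blockchain transaction. The function below is an illustrative sketch, not the system's actual API.

```python
import hashlib

def idea_digest(idea_text: str) -> str:
    """Return a SHA-256 digest of the idea text.

    Publishing this digest (e.g. inside a blockchain transaction)
    proves the exact text existed at publication time, without
    revealing the text itself.
    """
    return hashlib.sha256(idea_text.encode("utf-8")).hexdigest()

digest = idea_digest("My idea, verbatim.")
print(digest)  # 64 hex characters; changes if even one byte of the idea changes
```

Anyone holding the original text can later recompute the digest and match it against the published one.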
- Forward ideas to a mailing list (distribution mode) to preserve a record with one click, and also save them to archive.org.
- Enable subscriptions to categories, and curate specialized lists of people to forward to.
- Make a waitlist landing page + collect emails.
- Collect a specialist mailing list for forwarding.
- Enable paid posting of ideas.
- ✅ Do something about slow page loading.
- ✅ Improve idea signing functionality.
- Automatically list future shares with unmatched time as products. Rationale: the products feature (which does allow buying things) is currently not useful, because buying traditional products is much more efficient through traditional systems. However, project shares are products too, and we have already quantified and categorized them: "targets" are fundable items, and funding them creates shares. So the "targets" are actually "products", and could be listed as products to be bought.
Improved the feeds summary a bit.
To increase the signal-to-noise ratio for first-time visitors to the home page, I hid the menus and compacted the operations, to let the eyes focus on content.
However, some people apparently didn't like it, because it was no longer clear what to do, so I mostly reverted the change.
- ⬜️ UX consultation from a professional and UI improvement
- ⬜️ Help video library page where users can watch videos and learn how to use the system
It is now possible to lock targets, create teasers for investment, and share them with potentially interested parties.
I implemented a hash that combines the previous data, the current column, and the previous hash to produce a hash of the entire database.
This will allow us to synchronize with a minimum of data transmission once I write the synchronizer, which will rehash all of its own data, then binary-search the sorted data by hash to find where the copies diverge.
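A minimal sketch of that scheme, assuming rows are kept in a stable order and each row's hash folds in the previous hash (all names here are illustrative, not the actual implementation):

```python
import hashlib

def chain_hashes(rows):
    """Fold each row into a running hash: hash_i = H(hash_{i-1} || row_i).

    The last element is a hash of the entire ordered data set; the
    intermediate elements let two copies compare prefixes cheaply.
    """
    hashes = []
    prev = b""
    for row in rows:
        prev = hashlib.sha256(prev + repr(row).encode()).digest()
        hashes.append(prev)
    return hashes

def first_divergence(local, remote):
    """Binary-search the prefix hashes for the first differing row.

    Because each hash chains the previous one, equality at index i
    implies the whole prefix up to i matches, so the comparison is
    monotone and binary search applies.
    """
    lo, hi = 0, min(len(local), len(remote))
    while lo < hi:
        mid = (lo + hi) // 2
        if local[mid] == remote[mid]:
            lo = mid + 1
        else:
            hi = mid
    return lo  # index of the first row that needs transmitting

a = chain_hashes([1, 2, 3, 4, 5])
b = chain_hashes([1, 2, 9, 4, 5])
print(first_divergence(a, b))  # 2
```

Only O(log n) hash comparisons are needed to locate the divergence point, after which just the differing suffix has to be transmitted.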
I spent a few hours working on the remainder of this problem and got document retrieval working after finishing document saving.
This lets you save and retrieve JSON documents.
The document data inserted is also queryable by SQL.
Joins against documents are not yet supported, but I plan to implement them.
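One common way to make JSON documents SQL-queryable is to flatten nested keys into (doc_id, path, value) rows; whether the engine described above does exactly this is my assumption, so treat the sketch below as illustrative only.

```python
import json
import sqlite3

def flatten(obj, path=""):
    """Yield (path, value) pairs for every leaf of a JSON object."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from flatten(v, f"{path}.{k}" if path else k)
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from flatten(v, f"{path}[{i}]")
    else:
        yield path, json.dumps(obj)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (doc_id TEXT, path TEXT, value TEXT)")
doc = {"items": [{"name": "item1"}, {"name": "item2"}],
       "subobject": {"subobject_key": "value"}}
db.executemany("INSERT INTO docs VALUES ('d1', ?, ?)", list(flatten(doc)))

# The document is now queryable with plain SQL:
rows = db.execute(
    "SELECT value FROM docs WHERE path LIKE 'items[%].name'").fetchall()
print(rows)  # [('"item1"',), ('"item2"',)]
```

Joins against documents then reduce to joins on the path/value table, which hints at how that planned feature could work.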
I have an idea on how to solve the "winning" copy problem.
Have a separate table that hashes every column field of every row and gives it a version.
This is the version that is compared.
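In miniature, the idea could look like this: one (hash, version) entry per (row, column) field, bumped on every write, so conflicting copies can be compared field by field. All names are hypothetical; this is a sketch of the concept, not the actual design.

```python
import hashlib

class VersionTable:
    """Tracks a (hash, version) pair per (row_id, column) field.

    When two replicas disagree, the field with the higher version
    "wins"; equal versions with different hashes signal a real conflict.
    """
    def __init__(self):
        self.entries = {}  # (row_id, column) -> (hash, version)

    def record_write(self, row_id, column, value):
        h = hashlib.sha256(repr(value).encode()).hexdigest()
        _, version = self.entries.get((row_id, column), (None, 0))
        self.entries[(row_id, column)] = (h, version + 1)

    def winner(self, other, row_id, column):
        """Return 'self', 'other', or 'conflict' for one field."""
        mine = self.entries.get((row_id, column), (None, 0))
        theirs = other.entries.get((row_id, column), (None, 0))
        if mine[1] != theirs[1]:
            return "self" if mine[1] > theirs[1] else "other"
        return "self" if mine[0] == theirs[0] else "conflict"

a, b = VersionTable(), VersionTable()
a.record_write(1, "name", "alpha")
b.record_write(1, "name", "alpha")
b.record_write(1, "name", "beta")   # b wrote twice, so its version is higher
print(a.winner(b, 1, "name"))  # other
```

Comparing hashes first avoids shipping whole field values; versions break the tie when hashes differ.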
Most computers use locks to synchronize, which are very slow. This design is far faster and allows locks to be elided for performance.
I use unsynchronized data structures and achieve raw communication performance of 100 million requests per second, at a cost of 10 nanoseconds per batch of messages.
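The batching half of that claim can be illustrated (not benchmarked) with a toy model: messages accumulate in a local, unsynchronized buffer and are handed off in batches, so any fixed per-handoff cost is amortized across the whole batch. This is my sketch of the general technique, not the author's implementation.

```python
class BatchingSender:
    """Accumulate messages in an unsynchronized local buffer and hand
    them off in batches, amortizing per-handoff overhead across the
    whole batch (toy model of the design described above)."""

    def __init__(self, transport, batch_size=1024):
        self.transport = transport   # callable that takes a list of messages
        self.batch_size = batch_size
        self.buffer = []

    def send(self, msg):
        self.buffer.append(msg)      # no locks: single-producer buffer
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(self.buffer)
            self.buffer = []

received = []
s = BatchingSender(received.append, batch_size=3)
for i in range(7):
    s.send(i)
s.flush()
print(received)  # [[0, 1, 2], [3, 4, 5], [6]]
```

With a fixed handoff cost of, say, 10 ns per batch, the per-message cost shrinks linearly with batch size, which is how very high message rates become plausible.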
I managed to store a JSON document, using the SQL inserter to insert it. In theory, the object is queryable by SQL:
{ "items": [{"name": "item1"}, {"name": "item2"}], "subobject": {"subobject_key": "value"} }
- Now, target owners can decide when a target is ready for investment; targets are not open for investment by default.
19,227 2:02 | ➔ | 0.02 ħ | For buying HOUR.
19,227 2:02 | ➔ | 0.040000000000000084 ħ |
19,227 2:02 | 🡰 | -0.04 ħ |
19,227 2:02 | 0.04 ħ | ➔ | #t-130001 | (+0.04 ḥ ⇌ Mindey) | For buying HOUR.