Social software platforms are no longer just businesses. They are virtual governments that influence their users' real lives.
It is time that we begin developing social platforms that adequately value this responsibility. We must give significant consideration to all possible implications of the success of our technology.
The first generation of social media has taught us many lessons. It’s time to put these lessons into practice by building social platforms centered around sustainability, equity, and alignment of interests between platform and user base.
This paper introduces revolutionary governance principles, methods for capturing value and distributing it back to users, and strategies for social platforms. It also includes implementation examples and security considerations.
At Spriteley, we recognize that it’s time for a new and better virtual future that coexists with and adds value to our real lives. We believe that every platform has the power and responsibility to reward all of its participants proportionally for their contributions.
We developed the following methods not just as the basis for the Spriteley metaverse but as an example of how social software should be built or retrofitted to distribute value to all parties more fairly and transparently.
The time is now to focus on the power of social technology responsibly, sustainably, equitably, and efficiently.
Today’s status quo is that the platforms enjoy absolute control and make disproportionately small efforts to empower their users and creators.
Social media companies do not operate with their users' best interests in mind. They cannot, as their interests have been misaligned since their platforms were created. If this were not the case, why would intrusive data collection and advertisements be the bread and butter of all monetization strategies employed by today’s social media giants?
Without users, a social platform is nothing more than a circuit with no electricity. Platforms must reward user participation by moving away from ad-based revenue models and focusing on building deep user-to-user connections.
With this context, we can adequately assess the contributions of this paper to the fields of social technology, blockchain, and the greater good.
Traditionally, the owner and operator of a software platform are the only ones in a seat of power controlling all decision-making. Software users are typically considered as a single group, but in reality, there are usually different types of users that are all necessary for producing value in the system. By understanding the roles of different users and how they work together, value can be maximized and equitably distributed amongst all roles.
Every social software platform is not just a business but an economy. The value created and distributed in a software system may be simple and self-contained, such as a photo-sharing cloud in which only family members participate, or complex and widely distributed to many unrelated parties, such as a social media platform. Thinking of software platforms in terms of the value they create and how it is distributed opens the door to a broader range of solutions, as well as to the realization that democracies best govern economies.
In systems where users exchange value, economic balance is of crucial importance. The software operator is not the only role that should be collecting value—whether in the form of a real-world currency, an incentive that is specific to the platform, or both.
By the end of this paper, you will understand the problems that have manifested in today’s largest social software platforms. You will understand possible solutions to those issues and have a good idea of how they can be effectively implemented on a social software platform. There may even be merit to these ideas in relation to real-world governments.
Creating sustainable software economies is a complex challenge. Let’s begin by examining what makes a software economy sustainable.
Sustainability can be achieved when the operation of the software economy does not require compromising its core purpose.
Consider the placement of advertisements in the sidebar of a DIY home repair instruction website. If the ads are related to the site's content, they may be genuinely valuable for a reader to be exposed to them. We generally accept this type of advertising because it fits fairly well with the purpose of the site. The reader can choose to look at the ads or simply continue reading.
Now consider the placement of advertisements in the middle of streaming media. When listening to streaming music, advertisements are disruptive to the purpose of the software. The listener does not have the ability to hear the ad in the context of the song and choose when to pay attention to it. Furthermore, the experience of time-based content such as music, podcasts, and videos is deeply altered by introducing unexpected, unrelated content that interrupts the flow.
In this case, the operation of the software economy has compromised its core purpose. Naturally, most users in this situation find the ads intolerable. This experience produces an unresolved tension between the software operator and listeners. Without the listeners, the ads have no value. Without the ads, the operator may not be able to capture enough value to provide the platform.
There are several other indicators of a sustainable software economy. The following list represents strong themes of sustainability.
The core purpose of the software is uncompromised
Equitable distribution of value
Flow of value is a virtuous circle, not a vicious circle
Adjusts quickly to the needs of the user community
Contains mechanisms for balancing and rebalancing value flow
Strong alignment of interests of the various roles
Once a software economy exemplifies these principles, it will be well on its way to total sustainability. Let’s look at one of the most critical principles in more detail.
Before we get started, we must first understand tangible and abstract value.
All activities have positive or negative impacts on the environment in which they occur, whether in the real world or an online community.
Any activity that positively impacts its environment can be said to generate abstract value: value that has not been captured and translated into something tangible. Once abstract value has been translated into an objectively quantifiable form, such as a currency, good, or service, it can be considered tangible. Today, abstract value is rarely accurately recognized, captured, and translated into tangible value for those who create it.
In the real world, you most likely participate in far more value generation than you are meaningfully rewarded for. As a participant in the economy, you may see billboard advertisements, shop at a grocery store, play with friends at a park, and listen to a musician playing on the street. These activities add value to the economy, but you do not receive any value other than what you capture from the visual information of the advertisement: the knowledge of a good or service or, in some cases, enjoyment. In order to capture a larger piece of the value you bring to an economy, you must typically perform other activities, such as working a job to collect a wage.
Let’s take a look at how value is commonly handled in virtual environments.
To illustrate the value distribution in existing software economies, we will use a fictional case study of a social media platform called “Broadcast”. Broadcast is a video streaming network in which all participants are sectioned into the following categories:
To start, we will observe a simple system in which the platform requires no resources to operate. In this experiment, creators make valuable content that attracts consumers to the platform.
Through watching the content, consumers provide value to the creators by pushing quality content to the top of Broadcast’s algorithm and providing positive feedback to the creators, motivating them to keep creating.
A positive feedback loop is formed.
From this two-way exchange of value, we can see that even Broadcast is a simple economy, with the goal of matching users with creators as efficiently as possible. When ads are introduced into the system as a means of generating revenue, the system loses efficiency.
Since a user's time is valuable, the introduction of ads reduces the total value that a consumer receives on the platform in a given amount of time. There are a number of side effects of utilizing a method of capturing value that is out of alignment with user interests.
There are many examples of value capture methods that rely on questionable mechanisms such as needlessly inhibiting the functionality of software, or bombarding consumers with unexpected paywalls. In this paper, we will focus on advertisements. Introducing ads into a platform will generally have the following detrimental effects:
Distracting from consumption of valuable content
Raising the skill level and time commitment required for creators to succeed
Increasing the effort required to create content
Intruding on users’ private space with targeted ads
Producing an unreliable and inequitable method of distributing value to creators
We can refer to the sum of these detrimental factors as friction. Friction in a software economy slows the system, and users, down.
Imagine you are in a car driving from point A to point B. Rather than taking a straight path to your destination, you are stopped several times by pedestrians, street vendors, traffic lights, and construction detours. Your simple journey has turned into something more complex, and you are frustrated. All of these factors delay you from reaching your goal, which is exactly what unnecessary ads do to the user experience.
Platforms introduce monetization techniques to capture the value they need to operate and to collect their deserved profit for hosting, creating, maintaining, and growing their platform. However, the core issue, which we will attempt to address, is the assumption that forcing advertisements into a platform, compromising its core purpose and reducing value for users, is the only option. It is not.
To better understand how we may create a more frictionless value capture method, let’s first understand how value is attributed to activities today.
One of the reasons that software platforms today do not implement a more sustainable and equitable distribution of value is that attributing value accurately to activities is difficult, and platforms have been able to satisfy their shareholders without such an application of technology. As a result, activity tracking is almost always used to produce user analytics and improve the software, not to calculate how much value is being generated in the system for the purpose of allocating that value fairly to those who create it.
In real-world economics, it is commonly understood that an economy's participants produce value with everyday behaviors such as shopping at a grocery store or glancing at a billboard advertisement. Not only are these activities incredibly difficult to track, but they are also quite personal. That type of tracking would likely feel very invasive. But since software platforms already gather similar information, opportunities for fair and efficient value capture are at our fingertips. A sustainable social software platform would automatically and 100% anonymously track this information.
Because the system's operation consists entirely of information being exchanged, user activity in a software system is already inherently tracked; there is no user activity that does not involve some amount of data flowing through the system. Thus, the incidental information, that is, the actual content, can be ignored and an anonymous record of activity kept. This record can be used to attribute value.
In most circumstances where value is attributed in existing systems, it is in the form of content. One example of this is social media posts. A post can serve as a placeholder for value creation and, in many cases, is rewarded in some way. For example, receiving a like on a post creates a small reward for the user in the form of dopamine. Over time, these micro-releases can make social media addictive.
Let’s return to our fictional case study of Broadcast to understand this fully.
Broadcast designed its platform so that when creators post videos, they are rewarded with likes and views from users. The value generated from watching the videos is translated into knowledge for the users and popularity for the creators. If the activity is liking a video on a user’s timeline, then the value is translated into dopamine, or social status, for the user who posted the video. Value is transferred no matter what; what can be controlled is what that abstract value is transferred into.
However, tracking content such as a social media post is not enough to accurately represent the value created in a complex social software platform. Doing so only tracks one artifact of value creation, not all value creation across the platform itself.
Since users’ actions are already tracked in ways that are invasive and personal, Spriteley introduces anonymous tracking and data storage. This contrasting method of gathering information still attributes value to users’ participation but doesn’t compromise their privacy on the platform.
This allows a value measurement that more closely aligns with the wide variety of roles and activities that make the system work sustainably. From this point, a platform must then attribute the proper value to each activity to incentivize those that contribute positively to the system as a whole. Before we look at that concept, let’s understand how activity can be safely, anonymously, and efficiently measured even in a highly complex system.
The general concept we need in order to track activity is “Proof of Use”. If we can reliably gather information about the usage of a software system in a way that all users accept and trust, then we can attribute value and distribute that value sustainably. This comes with significant challenges.
Once we assign and distribute actual value based on activity, there is an incentive to manipulate the activity data. When a system tracks user activity for purposes of improving the product or generating business reports, there is little incentive for users to alter the data; users would spend effort manipulating the system with no reward other than the personal satisfaction of causing mischief. Attempts to manipulate the system multiply as soon as users stand to benefit from doing so.
Proof of Use is a trustless method of attributing value. This means that no user or role in the system can manipulate the attribution of value without the agreement of a majority of others. To make this happen, we must be able to reach the following conclusions for all activity:
We know that the user is real
We know that the activity is real
We know that the activity date and time are accurate
We know that the quantity of activity is accurate
We know that the activity represents value creation in the system
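As a sketch of how these five conclusions might travel together in a single record, here is a minimal, hypothetical activity submission structure; every field name here is illustrative, not part of any specified Spriteley format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ActivitySubmission:
    """One signed activity record; field names are illustrative."""
    user_public_key: str   # the user is real: key is registered in the system
    activity_type: str     # the activity is real: e.g. "read_article"
    time_code: str         # rolling code proving when the activity happened
    place_code: str        # passive-node code proving where it happened
    quantity: int          # number of time-code periods the activity accrued
    signature: str         # user's signature over all of the fields above

def digest(sub: ActivitySubmission) -> str:
    """Canonical hash of the submission (minus its own signature)
    that other nodes would countersign when validating it."""
    payload = asdict(sub)
    payload.pop("signature")
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
```

The digest gives every validating node the same bytes to countersign, so agreement on an activity is agreement on one canonical record.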
Now that we understand what Proof of Use is, let’s look at the anatomy of a Proof of Use system.
A node is anything in a proof of use system that transmits and validates activity on the platform.
All users are attached to an individual node, called a user node, but not all nodes contain users. Non-user nodes are called passive nodes. These can be places or items.
User nodes contain the following node elements:
Passive nodes contain only transmitters.
In proof of use, the term “user” refers to the human interfacing with the system. Each user is like a spaceship pilot– they make the higher-level decisions, and the ship (node) handles complementary tasks behind the scenes.
Transmitters are autonomous node elements responsible for submitting activity information on behalf of the node and for validating the activity of other nearby users. Such activity might include where, when, and with whom a user interacts, or how long they have interacted with sponsored content.
Transmitters are the only node elements present in both user and passive nodes.
It is up to the platform operator to design a system in which enough nodes are located throughout their system that activity is accurately and efficiently submitted and verified without creating computational excess, as we see with proof of work systems today.
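To make the node anatomy concrete, here is a minimal sketch of user and passive nodes, assuming a simplified transmitter that only records submissions and countersignatures (a real transmitter would also verify signatures and Proof of Use codes):

```python
class Transmitter:
    """Autonomous node element: submits this node's activity and
    countersigns (validates) activity from other nearby nodes."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.outbox = []      # activity submitted on behalf of the node
        self.validated = []   # activity countersigned for other nodes

    def submit(self, activity: dict) -> None:
        self.outbox.append({**activity, "from": self.node_id})

    def countersign(self, activity: dict) -> dict:
        signed = {**activity, "countersigned_by": self.node_id}
        self.validated.append(signed)
        return signed

class Node:
    """Anything that transmits and validates activity on the platform."""
    def __init__(self, node_id: str):
        self.transmitter = Transmitter(node_id)

class PassiveNode(Node):
    """A place or item; contains only a transmitter."""

class UserNode(Node):
    """A node attached to one human user, who makes the high-level
    decisions while the node works behind the scenes."""
    def __init__(self, node_id: str, user_name: str):
        super().__init__(node_id)
        self.user_name = user_name
```

In this sketch, a library (passive node) can countersign a visit submitted by a user node's transmitter without the user doing anything.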
Now that we understand the key terms, let’s look at how Proof of Use systems put nodes to use to create a vastly improved social software platform.
If value in any system can be reliably and fairly attributed to activities, then a token can be produced that incorporates that value.
The Spriteley cryptocurrency is called User Coin. We can produce and distribute User Coin based on usage of our software platform. Ideally, User Coin is the currency used in the software platform such that its generation facilitates further usage of the platform and, therefore, further value creation.
By generating User Coin based on value-generating activity, it becomes possible to distribute value in the system both to compensate and to facilitate further value creation, as well as to exchange it directly. Accurately translating value into coins, which have tangible value within the system and on the open market, and distributing them proportionally to the users who created the value is the most direct and empowering method of value translation. Furthermore, using User Coin within the system itself ensures that it has inherent value and, therefore, that compensation for a user’s activity does not erode.
User Coin is generated whenever a certain amount of verified activity points is accumulated in the system. This can occur automatically whenever activity reaches a given threshold or on a pulse triggered by the time codes changing.
A whole User Coin does not need to be created. It is possible to create any portion of a User Coin to accurately match the abstract value created in the system that has not yet been turned into a User Coin.
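A minimal sketch of fractional minting, assuming a hypothetical threshold of 1,000 verified activity points per whole User Coin:

```python
POINTS_PER_COIN = 1_000  # illustrative threshold, not a specified value

def mint_user_coin(verified_activity_points: int) -> float:
    """Translate accumulated verified activity points into User Coin.
    Any fraction of a coin can be created, so abstract value does not
    have to wait for a whole-coin threshold to become tangible."""
    return verified_activity_points / POINTS_PER_COIN
```

Under this assumption, 250 verified points would mint a quarter of a User Coin immediately rather than waiting for a full coin's worth of activity.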
User nodes can be given a score to indicate how well they align with the software economy's sustainable operation. The sustainability score will also determine the percentage of the total value created by a given user that they earn back in the form of User Coin.
A user who participates sustainably in the software system will receive a more significant percentage of value than a user who does the same activity but has a lower sustainability score overall.
The sustainability score must meet the following conditions:
Be calculated from the user’s behaviors directly
Be calculated and agreed upon by the other nodes
Be predictable in terms of what it will be and how it is calculated
Represent alignment with the best interests of the whole user community
Impact outcomes for the user to produce the incentives and disincentives needed in the software economy
The sustainability score should have a maximum value, such as 100, which represents complete (if temporary) alignment with the sustainable operation of the system. The score should have a limit because it represents alignment: it is not a competitive score, and it does not represent the value attributed to that user. It does, however, impact the attribution of value. A maximum score of 100 can be treated as 100% alignment, and a score of zero represents no alignment. By multiplying the alignment score with the signed activity submitted, value attribution quantities can be adjusted to reflect the user’s current alignment with the greater good of the system.
The sustainability score should drop over time, like a balloon that is bumped into the air and slowly falls. A small amount of aligned behavior is sufficient to keep the score all the way up, but an absence of behavior over time leaves the user’s alignment unknown. Therefore, the score should eventually drop without any activity. The rate at which it drops, and whether or not an inactive user’s score fully reaches zero, are implementation details that may vary from one system to another.
A user’s sustainability score can rise from having activity submissions countersigned by users with a high score. Users with a high score should not have their score reduced by countersigning valid activity submissions from users with a lower score. All users should have their scores reduced from countersigning invalid activity submissions.
We already have technology suitable for creating user identities that allow verification while maintaining anonymity. The public and private key technology that powers secure websites, e-commerce transactions, and cryptocurrency wallets is perfectly adequate for a Proof of Use system.
Each user node in a system can have public and private keys. The private key can be generated by the user and used to sign activity. The public key is used to verify the user’s existence in the system and can be registered according to a set of rules and processes.
This is very similar to the standard practice of users signing up for a software account using their email address and verifying ownership of that address by clicking on an emailed link with a special code. This approach lets software systems know that a new user is joining and perform onboarding steps, even though the user has never shared the private information used to add and receive value.
This mechanism further empowers platforms to treat decentralization as a spectrum as opposed to a hard line. Systems can be built in which no sensitive user information is generated centrally. However, a centralized part of the system, such as a user notification server, can be given enough information to perform its function.
These genuine user identities are inherently anonymous. Real-world user information such as a name and email address can be added on top of the anonymous identity, but this is optional and can vary from system to system. Furthermore, the identities are verifiable, which is required under the Proof of Use paradigm.
In order to verify when an activity occurred and to verify that the activity is genuine, every software system needs a mechanism to guarantee “time and place.” The time must be universal to the system, and the place needs to represent the situation in which the activity is happening— this could be a virtual library or something more abstract like a user’s timeline.
A rolling set of unpredictable codes linked back to specific periods can be generated, much like a two-factor authentication app. The codes can be broadcast to all online user nodes and change at any fixed interval, such as every 30 seconds. To account for network lag and other issues, the platform may even choose to consider multiple codes as valid at one time, such as the last three or last five codes.
Because the codes are randomly generated, by including them in an activity signature, the activity can be considered accurate in terms of time. There is no other way to associate an activity with a particular date and time except for a user node to sign it with the code for that date and time in the system and submit it within the allowed time window.
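A rolling time-code generator in this spirit can be sketched with an HMAC over the current period index, much like a two-factor authentication app. The system secret and the 8-character truncation are illustrative assumptions; in a real deployment the codes would be generated unpredictably and broadcast to online nodes:

```python
import hashlib
import hmac

CODE_INTERVAL = 30  # seconds per time code, as in the example above
SYSTEM_SECRET = b"illustrative-system-secret"  # hypothetical

def time_code(now: float) -> str:
    """Unpredictable rolling code for the 30-second period containing `now`."""
    period = int(now // CODE_INTERVAL)
    mac = hmac.new(SYSTEM_SECRET, str(period).encode(), hashlib.sha256)
    return mac.hexdigest()[:8]

def valid_codes(now: float, window: int = 3) -> list:
    """Accept the last few codes to absorb network lag, newest first."""
    period = int(now // CODE_INTERVAL)
    return [time_code(p * CODE_INTERVAL)
            for p in range(period, period - window, -1)]
```

A user node stamping its activity with the current code, and submitting while that code is still in the valid window, binds the activity to that moment in system time.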
The concept of place is more complex but operates on the same principle. Imagine that you are asked to go to the library and are later asked to prove how long you were there. We already have the time verification figured out above. You would just record the time codes for when you arrive and leave. But how do you prove that you were “there” at the library during that time window?
If the library contained some special code that you could only discover if you visited, it would be possible to prove that you were there. The only problem with this approach is that the library code is something that you could record and use every time you were asked to verify your activity. Therefore your proof is only verifiable the first time you go. You can prove that you’ve been at least once but nothing more detailed than that.
Because we are emitting unpredictable codes on a set time interval, we can empower the library to produce a rolling visitation code that requires you to be there at the time it was emitted to prove that you were there. This mechanism would successfully allow a user node to prove that it was at the library at a specific time.
Just like with user nodes, the library, which is a passive node, can also be matched with a public and private key set.
The public key of the library can be distributed to everyone and used to verify activities that occur in the library. Suppose the library signs the current time code as it is emitted. In that case, the library is effectively emitting its own special code that aligns with time but can only be known by interacting in or with the library.
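The rolling visitation code can be sketched as follows, using an HMAC as a dependency-free stand-in for the library's signature (a real system would sign with the passive node's private key so that anyone holding its public key could verify the code):

```python
import hashlib
import hmac

def visitation_code(library_key: bytes, current_time_code: str) -> str:
    """The passive node 'signs' the current time code, producing a place
    code that can only be learned by being at the library right now.
    HMAC stands in for a public-key signature to keep the sketch
    dependency-free; keys and truncation length are illustrative."""
    mac = hmac.new(library_key, current_time_code.encode(), hashlib.sha256)
    return mac.hexdigest()[:12]

# A visitor records the codes on arrival and departure; together with the
# matching time codes, they prove presence for that entire window.
arrival = visitation_code(b"library-key", "time-code-at-4:13pm")
departure = visitation_code(b"library-key", "time-code-at-4:38pm")
```

Because the place code changes with every time code, replaying last week's library code proves nothing about today's visit.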
To further establish the validity and versatility of the Proof of Use paradigm, let’s see how this system might function when applied to today’s internet. Instead of the library scenario, let’s imagine that we are talking about pages of an article that a user claims they read from 4:13 pm to 4:38 pm.
If the server which provides the web pages of the article is a passive node, it can also emit special codes for each page which change as time passes, so a user node could prove that it actually read particular pages at particular times. The user node would include the time and article page codes in its signed activity. Again, the user is not required to do any additional tasks; the user node carries out all of these tasks autonomously, without user involvement, similar to leaving footprints in the dirt just by walking to your destination.
Following this approach, anything in a Proof of Use software system can be a passive node, emitting codes at a set interval in accordance with the universal clock. This can be applied to social media posts, advertisements, video segments, photo downloads, virtual libraries, and anything else transmitted digitally from a source to a user node.
In a Proof of Use system, the term bad actor refers to an individual or group who attempts to hack or in some way compromise the system. We can expect that the primary motivation for bad actors is to get more coins or other forms of quantifiable value than they are entitled to based on their activity in the system.
In any system that assigns and distributes value, there will be attempts to coerce the system into attributing that value inequitably. It is naive to think that a system can be built which is impervious to bad actors, but it is also irresponsible not to make the best effort possible regarding prevention and correction.
Some techniques for reducing bad actors are as follows:
Make the effort required to trick the system very high
Make the payoff for bad behavior meager relative to good behavior
Make the effort required to trick the system non-repeatable
Make the penalties substantial for getting caught doing bad behavior
Require multiple bad actors to work together in order for any harm to be possible
Maintain a healthy ratio of good actors to bad actors such that the system can never be taken over by bad actors
Examples of these techniques are found in the remainder of this paper. It is important to accept that there will be some unwanted activity in every system. Keeping it to an acceptable minimum is a more reasonable goal than trying to eliminate it completely.
Given the Proof of Use structure outlined above, bad actors will likely attempt to sign and submit activities they did not actually do. They may also try to automate account creation and activity generation, submit other users’ activity as their own, and create rogue nodes that refuse to countersign the activity of others. Some of these behaviors can be reduced by aligning interests.
We have solved for knowing that a user node is real, knowing that an activity is real, and knowing that the date and time are accurate. Now we need to know that the quantity of the activity is accurate. Doing so requires us to go beyond the signed activity itself and look at chains of activity as they are being created.
If we are going to attribute value to signed activities under Proof of Use, then we need to ensure that the quantity of activities is not overstated, even if the activities themselves are valid. This prevents bad actors from gaming the system by repeating activities endlessly. It also avoids encouraging well-meaning users to spend their valuable time staring at the same advertisement over and over again, for example. If we limit the rate at which activities can be counted toward a user’s benefit, we accomplish this goal. The challenge is how to enforce the rate limit.
One simple method is to limit every activity to a single signed event per user node per time code. A system with a 30-second time clock would mean that each user node is allowed to sign and submit a particular activity once every 30 seconds. The value attributed to that activity can simply be based on the number of time code periods over which the full activity accrues. Reading an article that takes about four minutes should allow for one-eighth of the total value to be assigned per activity signature, for example.
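These two rules, proration across time-code periods and one signed event per node per activity per time code, can be sketched as:

```python
def prorated_value(total_value: float, total_periods: int) -> float:
    """Value attributed per signed activity when the full activity
    spans several time-code periods."""
    return total_value / total_periods

class RateLimiter:
    """At most one signed event per (node, activity) per time code."""
    def __init__(self):
        self.seen = set()

    def allow(self, node_id: str, activity: str, time_code: str) -> bool:
        key = (node_id, activity, time_code)
        if key in self.seen:
            return False  # duplicate submission in the same period
        self.seen.add(key)
        return True
```

With 30-second time codes, a four-minute article spans eight periods, so `prorated_value(article_value, 8)` assigns one-eighth of the article's value per signature, exactly as in the example above; the values themselves are whatever the platform attributes to the article.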
Additionally, we need to limit the total amount of value attributed to an activity within a time period. It is valuable to read an article multiple times, and maybe even twice in a row, but reading the same article seventeen times in an hour can only be intense study, or it falls into the category of value attribution manipulation. Depending on how the system operates, it may be up to the system operators to choose the appropriate amount of time a user should spend reading an article or interacting with content in general; alternatively, value attribution may be democratized through a decentralized autonomous organization (DAO).
In the case that it is indeed a well-meaning user who is studying the article, they will be rewarded at first for signing activity, which is verified by the passive node that is the article and further authenticated by other user nodes on the network. As they continue to read the same article over and over again, they are no longer attributed value. But isn’t their studying the article a valuable activity? To the system, it’s not more valuable than someone simply reading the article. However, the user is welcome to continue gaining value from the article by learning from its contents. Further, their knowledge may allow them to contribute to the platform or participate in a highly valued activity for which they will be accurately compensated.
There are multiple scenarios in a Proof of Use system, which at first seem problematic but, upon further study, result in a fair and proportional distribution of value. We encourage you to examine your own hypothetical implementations and explore scenarios in which the Proof of Use model may still need improvement. In the next section, let’s consider some of those scenarios together.
Requiring nodes to countersign the activity of other nodes is a strong mechanism for creating alignment. It must be tempered by creating the right incentives, or users will opt for behaviors that serve themselves rather than those that uphold a sustainable software economy.
An invalid activity submission is one that does not follow the system's rules, such as exceeding rate limits or including incorrect Proof of Use codes. A user node signs and submits its activity, and other user and passive nodes accept that activity submission by verifying and countersigning it. Because value is attributed to activities, when an activity submission is countersigned successfully, it serves as a registration of that activity in the whole system, and value is attributed to the user node that submitted it.
Signing an invalid event must be disincentivized: any node that signs or countersigns an invalid event is penalized. This ensures that collusion between multiple users results in mutual destruction.
Similarly, refusing to sign valid events must also be disincentivized. Penalizing rogue nodes that avoid participating in the processes of the sustainable software economy helps everyone in the system by ensuring constant, sufficient participation by those with no ill intent.
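As a minimal sketch of these two incentive rules (all names, penalty values, and validity checks here are hypothetical illustrations, not a specified protocol), a countersigning node might be penalized both for signing an invalid submission and for refusing to sign a valid one:

```python
from dataclasses import dataclass

RATE_LIMIT = 100   # max submissions per node per epoch (assumed)
PENALTY = 10       # score deducted for misbehavior (assumed)

@dataclass
class Node:
    score: int = 0

def is_valid(submission, submissions_this_epoch=1):
    """A submission is invalid if it breaks a system rule, such as
    exceeding rate limits or carrying a wrong Proof of Use code."""
    return (submissions_this_epoch <= RATE_LIMIT
            and submission["code"] == submission["expected_code"])

def countersign(node, submission, signs):
    """Penalize signing an invalid event AND refusing a valid one;
    return True only when a valid event is countersigned (registered)."""
    valid = is_valid(submission)
    if signs and not valid:
        node.score -= PENALTY   # colluded with an invalid submission
    elif not signs and valid:
        node.score -= PENALTY   # withheld participation from a valid one
    return signs and valid

good = {"code": "abc", "expected_code": "abc"}
bad = {"code": "xyz", "expected_code": "abc"}

honest, colluder, rogue = Node(), Node(), Node()
registered = countersign(honest, good, signs=True)   # registered, no penalty
countersign(colluder, bad, signs=True)               # penalized
countersign(rogue, good, signs=False)                # penalized
```

Under these rules, honest behavior is the only strategy that avoids penalties, which is the mutual-destruction property collusion runs into.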
In a software economy, the system's usage indicates currency demand. Ideally, the availability of currency will match the demand. The User Coin concept represents the value created in the software economy, which is an excellent indicator of demand for a currency that allows value to be exchanged.
Because User Coin is created from Proof of Use, the rate at which the supply of User Coin increases will naturally speed up or slow down depending on recent system usage.
In addition to timing the release of User Coin to match demand, it is also essential to limit the total supply of User Coin so that it represents the software economy at full capacity. The amount of User Coin created for a given amount of verified activity points should drop, either periodically or smoothly, as the number of users on the platform grows. The passage of time itself is not a reason to reduce the amount of User Coin being created for the same amount of activity; the time it will take for a software economy to reach full scale cannot be known in advance, so a metric other than time must be used to control the supply of User Coin.
The amount of User Coin already created is an excellent metric for determining how much to create going forward. If the drop is scheduled periodically, then the quantities at which a drop will occur should be published transparently for all participants in the software economy.
If properly planned, a software economy should be able to approximate the number of users at which the User Coin creation reaches negligible amounts. A software economy may, for example, plan its User Coin reduction schedule to stop creating any substantial quantity of User Coin when it reaches the activity level of about 5 billion users. This can be a very effective way of ensuring that the software economy has sufficient value exchange currency for a variety of participants while also identifying the scale at which it expects to mature.
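One way to realize such a schedule, sketched below with assumed numbers (a hypothetical one-billion-coin cap and a halving rule; a real platform would publish its own parameters), is to halve the emission rate each time half of the remaining supply has been created, keying emission to supply already created rather than to elapsed time:

```python
import math

BASE_RATE = 1.0              # User Coin per verified activity point at launch (assumed)
MAX_SUPPLY = 1_000_000_000   # hypothetical cap representing the economy at full capacity

def emission_rate(cumulative_supply):
    """The rate halves each time half of the *remaining* supply has been
    created, so emission depends on supply already created, not time."""
    remaining = MAX_SUPPLY - cumulative_supply
    if remaining <= 0:
        return 0.0
    halvings = math.floor(-math.log2(remaining / MAX_SUPPLY))
    return BASE_RATE / (2 ** halvings)

rate_at_launch = emission_rate(0)               # full rate
rate_halfway = emission_rate(MAX_SUPPLY // 2)   # half rate
rate_at_cap = emission_rate(MAX_SUPPLY)         # emission stops
```

Because the thresholds are a function of supply created, they can be published transparently in advance, as described above for periodic drops.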
User Coin must be tradable on secondary crypto and fiat currency markets. Again, as the representation of value created in the software economy, it must be able to translate into forms of value exchange that allow participants to make use of the value.
The value at which a given User Coin trades with other currencies will likely be volatile. For participants who need to exchange their User Coin for other currencies frequently, this may pose a challenge. It is our hope that innovative financial instruments may develop to account for these market fluctuations. However, prices within the software economy should be able to remain stable, which will assist in maintaining reliable value for the participants.
If, for example, a vendor that accepts USD opens a shop in a virtual world built on the Proof of Use system, paying virtual rent in the currency of the software economy, an indirect link is made between User Coin and USD. This is just one of the countless ways the currency may stabilize in relation to other stable markers of value.
Value creation in the software economy should result in value distribution to those who created it. In a Proof of Use system, the total abstract value created by a user is translated into User Coin. The User Coin is then divided between the user and the community fund according to the user’s sustainability score. A higher sustainability score means a higher share of tangible value in the form of User Coin.
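A minimal sketch of this split (the linear rule and the 100-point maximum score are illustrative assumptions, not a prescribed formula):

```python
def distribute(value_coins, sustainability_score, max_score=100):
    """Split newly created User Coin between the user and the community
    fund in proportion to the user's sustainability score. The linear
    split and max_score of 100 are assumptions for illustration."""
    share = max(0.0, min(sustainability_score / max_score, 1.0))
    user_coin = value_coins * share
    community_fund = value_coins - user_coin
    return user_coin, community_fund

# A user with a higher sustainability score keeps a larger share,
# while the remainder capitalizes the community fund.
high = distribute(100, 80)   # (80.0, 20.0)
low = distribute(100, 30)    # (30.0, 70.0)
```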
Activities with a higher score represent higher value creation. They also represent a higher value payout to the user nodes signing and submitting those activities. This approach can be used to automate the rewarding of activities that the user community finds valuable. One example is streaming a user’s live music performance, which can be automatically submitted by the artist’s user node and signed by every unique listener, as well as the place in which they are performing since the live music adds value to the space itself. This would result in automatic compensation for the performer, who has created value for the other users. This model may eliminate the artist’s total reliance on tips or ticket sales as their only means of value capture.
Because every user node that submits verified activity will be accurately and proportionally compensated in User Coin, they are supplied with the means to participate further in the software economy. This virtuous circle of value exchange can dramatically reduce the disparity between affluent users and those who do not have the financial means to participate more fully in the software economy. We believe that a more diverse set of participants in a software economy creates greater value for all involved.
The number of points assigned to different activities in the software economy can be adjusted over time. This allows the system to balance and rebalance to fit the user community's needs. Activity point quantities should be published clearly and should not be so volatile as to cause confusion or uncertainty within the user community.
A number of methods can be used to assign point values to activities. One approach is for the software operator to simply decide and publish new point values regularly. Another method is for the user community to create their own incentivization structure through a DAO. A third option is to set the point values according to prior activity to create an automatic rebalancing effect or an amplifying effect for trending activities. There is ample room for innovation regarding this subject.
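The third option, setting point values according to prior activity, might look like the following minimal sketch (the linear share adjustment and sample numbers are assumptions for illustration):

```python
def rebalance_points(base_points, activity_counts, total_activity, amplify=False):
    """Adjust each activity's point value by its recent share of all
    activity. With amplify=False, popular activities earn fewer points
    (rebalancing); with amplify=True, trending activities earn more."""
    new_points = {}
    for activity, base in base_points.items():
        share = activity_counts.get(activity, 0) / total_activity
        factor = (1 + share) if amplify else (1 - share)
        new_points[activity] = round(base * factor, 2)
    return new_points

base = {"post": 10, "moderate": 50}
counts = {"post": 900, "moderate": 100}
rebalanced = rebalance_points(base, counts, 1000)
amplified = rebalance_points(base, counts, 1000, amplify=True)
```

In the rebalancing mode, posting (90% of recent activity) is damped while moderation barely changes; in the amplifying mode, the trending activity is boosted instead.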
Not all User Coin can be distributed to users with verified activity points. That would ignore the value creation of the software system operator, which is vital and substantial. A portion of the User Coin being created should be paid out to users with activity points, with the remainder paid to the holders of a set of unique tokens.
The number of tokens associated with a software economy can be set from the start and remain fixed. In this mode, the tokens represent ownership of the software system.
The User Coin that remains after paying users with activity points and capitalizing the community fund should be split evenly amongst all tokens. Any number of persons or organizations can hold tokens. Those who hold tokens will receive User Coin according to how many tokens they own whenever User Coin is created.
If the software operator holds a substantial number of tokens, it will capture value aligned with how much the system is being used. This should allow the software operator to continue its operations, enhancing the platform and serving the user community.
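The per-token payout described above can be sketched as follows (the holder names and quantities are purely illustrative):

```python
def pay_token_holders(remaining_coin, holdings):
    """Split the User Coin left after user payouts and the community
    fund evenly per token, so each holder receives coin in proportion
    to how many tokens they own."""
    total_tokens = sum(holdings.values())
    per_token = remaining_coin / total_tokens
    return {holder: tokens * per_token for holder, tokens in holdings.items()}

# A software operator holding a substantial share of the fixed token
# supply captures value in proportion to system usage.
payouts = pay_token_holders(
    1000, {"operator": 600, "founding_users": 300, "investors": 100}
)
```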
Given the above mechanism of special tokens, the development and launch of a software economy can be funded directly by the user community and other interested parties. A portion of the special tokens can be offered for sale before the software system is completed, resulting in funding to construct the platform.
Under traditional software platform funding models, ownership in the platform’s company is purchased by investors. As the software platform finds ways to capture value through monetization, the owners of the company receive that value. Usually, the number of investors is a tiny fraction of the total number of users.
When a large number of future users can fund the development of a software economy, there is immediate alignment between those creating value in the system and those receiving it. The users willing to take a risk and help establish a software economy will benefit significantly from helping it grow. If a large number of these founding users purchase tokens, then an initial user base for the software platform has already been established and can help it develop past launch.
A software economy can effectively bootstrap itself through this mechanism. Everyone benefits from the growing usage of the platform, including the users who did not purchase special tokens. The initial secondary-market exchange value of the system's cryptocurrency will likely grow substantially in its earliest stages, ensuring that the earliest users of the system stand to gain the most.
To fund software system expansion, the software operator may sell additional special tokens at any time. The market price and offering quantity for a special token offering should be set according to the following guidelines:
Price the token fairly according to the estimated amount of User Coin it will receive
Account for risk in the price, such that the earliest tokens have a lower price but also have a much higher potential gain
Fix the price of the tokens so participation in the offering is fair
Determine a quantity of tokens that is large enough that a single token price is low enough to be accessible to as many potential users as possible
Limit the number of tokens to the funding amount needed so the ongoing operation of the platform is not at risk
Consider offering the tokens in waves wherein each person or organization can only purchase a limited number and therefore a diverse set of potential users gets the chance to participate
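The first few guidelines can be sketched roughly as follows (every parameter, including the horizon and the risk discount, is an assumption for illustration, not a recommended valuation model):

```python
import math

def plan_offering(funding_needed, coin_per_token_per_year,
                  horizon_years=5, risk_discount=0.5):
    """Price each token from the User Coin it is estimated to receive,
    discount for risk (earlier offerings get a deeper discount), and
    offer only as many tokens as the funding target requires."""
    fair_value = coin_per_token_per_year * horizon_years
    price = fair_value * (1 - risk_discount)      # fixed price: fair participation
    quantity = math.ceil(funding_needed / price)  # limited to funding needed
    return price, quantity

# Example: 1M in funding needed; each token expected to earn 40 User Coin
# per year over the assumed horizon.
price, quantity = plan_offering(1_000_000, coin_per_token_per_year=40)
```

Offering these tokens in capped waves, per the last guideline, would then be a matter of splitting `quantity` across rounds with per-buyer limits.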
To empower users with consent in interactions, they must be provided with a mechanism for altering or terminating their interaction with another user at any time. One possibility is to require consent for every interaction, giving users a built-in mechanism for declining it.
User consent in interactions is an integral part of the sustainability score, meaning that it plays a key role in the operation of the software economy.
One example of declining consent is a user choosing to block or ignore another user who is saying things that are offensive to them. This is like a “downvote” on a content-based platform. The blocking action must result in an altered system experience for both users. The user doing the blocking may cease to see or hear the user they found offensive, and the user being blocked may also cease to see them.
Submitting activity involving a greater number of unique users should increase a user’s sustainability score faster, as this indicates a user interacting with a more diverse audience. This diversity amplifier can be adjusted to reward the various natural interactions associated with standard usage of the platform.
Diversity in user interactions is important for ensuring that the system's sustainability scores truly represent alignment with the community. Because sustainability scores go down over time with no activity, a smaller amount of activity with a more diverse set of users will produce a greater score for that user. This aligns the needs of the whole user community and the system itself with the interests of the individual users.
A lack of diversity in a user’s activity submissions will require a larger amount of activity, or higher-value activities, to keep the user’s score up. The diversity amplifier thus helps reduce long-term echo chambers: because a user’s score rises more slowly without diversity, interacting only within an exclusive group is not in a user’s best interest. Additionally, the lower scores of the users in an exclusive group will result in even less score increase, leading to the eventual collapse of the group’s scores. Additional activities and the inclusion of new group members can keep everyone’s scores aloft.
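An illustrative score update combining decay and the diversity amplifier (the decay rate and the logarithmic bonus are assumptions; a real system would publish its own formula):

```python
import math

DECAY = 0.99   # per-epoch score decay with no activity (assumed)

def update_score(score, activity_value, unique_partners):
    """One epoch of a sustainability score: the score decays, then each
    verified activity adds value amplified by the number of unique users
    it involved, with diminishing returns via log1p."""
    diversity_bonus = 1 + math.log1p(unique_partners)
    return score * DECAY + activity_value * diversity_bonus

# The same amount of activity spread across more unique users yields a
# higher score than the same activity within a small, closed group.
diverse = update_score(100, 10, unique_partners=20)
insular = update_score(100, 10, unique_partners=2)
```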
Note that a user who only interacts with pre-existing friends on the system could boost their sustainability score through other means of creating value for the system, such as building key infrastructure, participating in community reviews, and countless more. It is up to each system to choose how and to which activities value is attributed to reach their own version of sustainability.
However, new interactions between users play a fundamental role in strengthening the platform by strengthening the individual connections between the users. By promoting this activity, we can create a robust and socially sustainable platform with a stronger community fabric than today’s social media platforms.
Today’s biggest social media platforms scaled quickly before they were able to reach a sustainable approach to moderation or a value capture method that aligned their interests with the interests of their users.
Their growth was never tied proportionally to the culture of their platform, nor their ability to moderate the platform in a sustainable manner. Now they must censor users on their platform from the perspective of a centralized corporation, misaligned with, or at best, distant from the interests of their users. Meanwhile, forums that do employ a form of decentralized moderation have no means by which to tangibly compensate the thousands of moderators making the platform sustainable.
It is clear that innovation is needed in the space of sustainable platform moderation.
Perhaps nothing could have a greater impact for a platform than understanding, strengthening, and empowering its early user community before scaling up rapidly. A Proof of Use system would have a distinct advantage in doing just that.
Because a Proof of Use system is able to attribute values directly to activities, it can incentivize sustainable activities and disincentivize unsustainable ones. One activity that should be highly incentivized is platform moderation. However, not everyone can be a platform moderator. It takes specific skills and training, as well as a commitment of time and effort on the part of the moderator. Therefore, the education of users on moderation techniques and guidelines must also be among the most highly incentivized of any activity on the platform.
Through this form of incentivization via tangible value, a Proof of Use platform would be able to foster a disproportionately large number of moderators at an early stage compared to a traditional social platform. Ironically, this may allow a Proof of Use system to scale faster than a traditional one.
Just like the implementation of Proof of Use itself, it is easier and more effective to build a social platform with a heavy emphasis on the culture and empowerment of its user base from the platform's inception. A responsible platform operator must scale only at the rate that they can keep the platform healthy until a sustainable and robust method for guaranteeing the health of the platform through decentralized mechanisms is put in place. Once this has been achieved, the platform may grow boundlessly.
Furthermore, it is not just the learning of platform moderation techniques that may be highly rewarded but the learning of any knowledge related to the sustainability of the platform.
Once again, a Proof of Use system could potentially punch far above its weight, not just in tangible value delivered through User Coin but in abstract value in the form of knowledge of social sustainability. It is our belief that a well-implemented Proof of Use platform may serve as a beacon of hope for virtual worlds and real ones alike.
The details of the software platform and software economy should be openly available to everyone. The software operator should make it easy to understand and easy for users to keep up to date with any changes.
Participation in the entire software economy process outlined in this paper should be effortless for users. All of these steps should happen automatically as a well-intentioned user simply goes about their business using the system; they need only use the default-sanctioned software to behave in a way that is aligned with the community.
Attempts to manipulate the system should be so difficult that they require that new code be written, which is an effort that is not worthwhile for the average user who may be merely considering an attempt to cheat the system.
When value creation is attributed properly and value distribution is equitable, the software operator should be able to capture value in a way that does not introduce friction, and therefore more value is available for all participants.
Different outcomes can be arranged for different situations. A software operator can be commercial and seek a tidy profit. A software operator can also be a non-profit organization. The software itself can be open source or proprietary.
When a software operator is paying attention to the needs and desires of the user community and operating the platform reliably and effectively, the reward for taking on this immensely challenging role should be commensurate. Amazing value creation from the software operator should result in a high payout, but it is imperative to note that the payout for all participants in the software economy will also be high in this scenario. This is sustainable.
It may be difficult for existing systems to add the concepts featured in this paper to their ongoing operations, although it certainly can be done. We would like to see it accomplished across a large number of systems. However, it will be much easier to align user behavior with sustainability when the system is designed this way from the start.
Using these concepts could help resolve the misalignment of interests found in some systems. It may also serve as a way to learn how to incorporate the user community's needs into platform-building more reliably. It is an approach to capturing value that empowers those creating the value, including the software operator. The friction of an ill-fitting value capture approach reduces the value to all participants and can be compared to the loss of energy as heat in a mechanical system with significant friction.
In this paper, we have outlined a user ownership model that reflects participation in value creation more than privilege. We regard this as another important attribute of a sustainable software economy.
Work is necessary along many lines in order to construct a full implementation of this approach. Contributions from experts in a variety of disciplines will be highly valuable. Several iterations of test systems may be built and expose issues that need to be addressed. However, even a flawed implementation should provide incredible value in the form of learning for the entire world of software, if nothing else.