A three-sided network effect between authors, institutions, and #journals dominates the current #research #publication industry. Institutions and authors want to publish in well-known journals, while journals want to publish the work of well-known authors (and institutions) who have published in well-known journals. This can lead to self-fulfilling #prophecies. #PaperScore aims to break this outdated ecosystem by completely #decentralizing the #peerreview process. Moreover, the current ecosystem allows #editors to select #reviewers or reject a submission without any peer review, and they make the final decision on each submission. Like all people, editors are susceptible to #biases, so journals suffer from a range of #editorial biases. Such biases are reinforced when editors are always succeeded by those who have #published in the journal before. #scientificresearch #researchpublication #publications #revolutionary
About us
PaperScore is the first Decentralized Academic Journal and publication platform: it replaces the editorial board with a collective intelligence algorithm. This algorithm is implemented in computer code and controls the review process. PaperScore accepts manuscripts from all disciplines, particularly from interdisciplinary and multidisciplinary studies.

The current research publication industry is locked into a three-sided network effect between authors, institutions, and journals. Institutions and authors want to publish in well-known journals, while journals want to publish the work of well-known authors and institutions who have published in well-known journals. This can lead to self-fulfilling prophecies. Furthermore, many valuable research findings and studies go unpublished because of editorial biases such as significance bias and discipline bias. PaperScore provides a home for such homeless studies and papers, and it aims to break this obsolete ecosystem by completely decentralizing the review process.

Once an author submits a manuscript to PaperScore, the backend code randomly selects up to five potential reviewers based on the manuscript's keywords, citations, and referrals. Each potential reviewer receives an email and decides whether or not to evaluate the manuscript. Reviewers can also refer the manuscript to other potential reviewers they deem to be experts, and each referred reviewer has the same option in turn. Based on the small-world phenomenon, each chain should reach a suitable reviewer in fewer than six referrals on average, and most likely in fewer than three if there are proper incentives and the randomization formula incorporates relevant information.
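The submission-and-referral flow described above can be sketched in Python. Everything here, the reviewer pool, the keyword-overlap ranking, and the single-referral walk, is an illustrative assumption, not PaperScore's actual backend code.

```python
# Hypothetical sketch of the reviewer selection-and-referral flow described
# above. The reviewer pool, the keyword-overlap ranking, and the referral
# walk are illustrative assumptions, not PaperScore's actual backend code.

REVIEWERS = {
    "alice": {"keywords": {"blockchain", "governance"}, "refers_to": ["bob"]},
    "bob":   {"keywords": {"blockchain", "databases"},  "refers_to": ["carol"]},
    "carol": {"keywords": {"peer-review", "statistics"}, "refers_to": []},
    "dave":  {"keywords": {"economics"},                 "refers_to": ["alice"]},
}

def initial_candidates(manuscript_keywords, k=5):
    """Rank reviewers by keyword overlap and return up to k candidates."""
    ranked = sorted(
        REVIEWERS,
        key=lambda r: len(REVIEWERS[r]["keywords"] & manuscript_keywords),
        reverse=True,
    )
    return ranked[:k]

def follow_referrals(start, accepts, max_hops=6):
    """Walk a referral chain until someone accepts or max_hops is reached.

    Returns (reviewer, hops) on success, (None, hops) otherwise. The
    small-world argument in the post is that hops stays small on average.
    """
    current, hops = start, 0
    while hops < max_hops:
        if accepts(current):
            return current, hops
        referrals = REVIEWERS[current]["refers_to"]
        if not referrals:
            return None, hops
        current = referrals[0]
        hops += 1
    return None, hops

candidates = initial_candidates({"blockchain", "peer-review"})
reviewer, hops = follow_referrals(candidates[0], accepts=lambda r: r == "carol")
```

In this toy pool the chain alice → bob → carol reaches an accepting reviewer in two referrals, inside the six-hop small-world bound the post cites.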
- Website
-
https://paperscore.org/
External link for PaperScore
- Industry
- Book and Periodical Publishing
- Company size
- 201-500 employees
- Headquarters
- Austin, TX
- Type
- Privately Held
- Founded
- 2020
- Specialties
- Articles, Review, Publication, Journal, Decentralization, Science, Research, Collective Intelligence, Paper, Peer-Review, Authorship, Scoring, Discovery, Scientific Research, Scholars, Research Manuscript, Scientific Articles, Research Paper, and Reviewers
Locations
-
Primary
Austin, TX 78744, US
Employees at PaperScore
Updates
-
PaperScore reposted this
The corporation can be an efficient way of organizing business enterprises by serving as a nexus of contracts that simplifies the contracting problem. Each stakeholder can contract directly with the corporation, rather than with each of the other stakeholders. Corporations allow for the separation of ownership (residual risk-bearing) and control (decision-making) so that investors and managers can specialize. Moreover, residual claimants can reach unity of decision by delegating decision-making authority to one decision-maker (or a few). Otherwise, it would be too costly for many residual claimants to participate in every decision process (Fama & Jensen, 1983). Fama and Jensen (1983) outlined four steps in organizational decision processes (i.e., governance): (1) initiation, the suggestion of decision choices and actions; (2) ratification, the selection of a decision choice from the suggested initiatives; (3) implementation of the ratified decision; and (4) monitoring of the execution of the ratified decision. Fama, E. F., & Jensen, M. C. (1983). "Separation of Ownership and Control." Journal of Law & Economics. #corporations #decentralization #DACurve #DAC #decentralizedfinance
-
PaperScore reposted this
In multiple linear regression, high-dimensional data can pose challenges such as multicollinearity, unstable estimates, overfitting, and poor out-of-sample performance. Several methods can reduce dimensionality, address overfitting, and handle multicollinearity. In a recent post (https://lnkd.in/eeqETd6e), I compared Stepwise Regression, a Genetic Algorithm, and Lasso Regularization for addressing overfitting and high dimensionality. Here, I further compare Stepwise Regression, Principal Component Analysis (PCA), and Partial Least Squares (PLS) for reducing dimensions.

Stepwise Regression automatically selects the best predictors based on some statistical criterion. Its coefficients are easier to interpret, but they may become unstable with highly correlated predictors.

Principal Component Analysis (PCA) transforms the (potentially correlated) predictors into orthogonal components, thereby eliminating any possibility of multicollinearity. We can then regress the response variable on the most important components, reducing dimensionality while preserving most of the variance in the predictors. However, PCA creates components based only on the variance in the predictors, ignoring the response variable and its potential relationships with them. Therefore, PCA may perform poorly when the most relevant predictors contribute little to the variance of the most important components.

The Partial Least Squares (PLS) method, on the other hand, optimizes components to capture the maximum covariance between the predictors and the response while handling multicollinearity and reducing dimensionality. It is therefore expected to have more predictive power than PCA-based regression.

Both PCA and PLS require tuning the number of components in the model, and in both cases the component coefficients are hard to interpret directly. But we may multiply the component coefficients by their loadings to obtain coefficients on the original variables and interpret them as usual.
I evaluated and compared these three methods based on their out-of-sample (OOS) performance. First, I applied each method to a simulated dataset and averaged its results across 10 folds of cross-validation. Stepwise Regression performed better than the other methods at their optimal levels. Here is the R code: https://lnkd.in/erK3pi4g

Then I used real-world financial data to predict the weekly return of Tesla's stock price. To obtain robust estimates of OOS performance for each method, I used 10 iterations of rolling-forward time-series cross-validation and averaged the results across the 10 splits. The best predictive performance belonged to PLS with 23 components. Here is the R code: https://lnkd.in/egJ3GV-f

As you can see, the performance of each method depends heavily on the situation and the problem at hand, so you need to compare the outcomes of different methods in each case. You can use these R scripts to compare methods for your specific problem: https://lnkd.in/eSf32yct
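For readers who prefer Python to the linked R scripts, the rolling-forward splitting logic can be sketched as follows; the expanding training window and the window sizes are illustrative assumptions, not the post's exact setup.

```python
def rolling_forward_splits(n_obs, n_splits, test_size):
    """Generate (train_indices, test_indices) pairs for rolling-forward
    time-series cross-validation: the training window expands forward in
    time, and each test block immediately follows its training window,
    so the model is never trained on future observations."""
    splits = []
    for i in range(n_splits):
        test_end = n_obs - (n_splits - 1 - i) * test_size
        test_start = test_end - test_size
        train = list(range(0, test_start))
        test = list(range(test_start, test_end))
        splits.append((train, test))
    return splits

# Example: 20 weekly observations, 3 splits, 4-week test windows.
splits = rolling_forward_splits(20, 3, 4)
```

Averaging a model's error across the test blocks of these splits gives the robust OOS estimate the post describes.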
-
PaperScore reposted this
Yes, blockchain is an append-only database, but no, we cannot simply build one with SQL. However, for educational purposes, I designed a relational database that imitates the Bitcoin blockchain to some extent. It has four main tables:

Accounts: stores the public keys of senders and receivers involved in transactions. It does not store balances; the database calculates balances dynamically by summing up UTXOs (unspent transaction outputs) from the Parties table. If we also stored the balance of each account, it would look more like the Ethereum blockchain.

Transactions: records the value transfers between accounts. To mimic Bitcoin, each transaction can have multiple senders and multiple receivers, so it has a many-to-many relationship with the Accounts table.

Parties: the bridge table that establishes the many-to-many relationship between Transactions and Accounts. To track each party's gain from and contribution to each transaction, each record has an amount (UTXO) that is positive for receivers and negative for senders. The sum of inflows can exceed the sum of outflows; the extra amount is the fee (tip) that goes to the miner.

Blocks: each block can have one miner and multiple transactions. Blocks are added one at a time, and each block points to the previous block via the block hash.

We also need to make the database immutable and tamper-proof. While anybody can read (SELECT) information from any table, UPDATE and DELETE access is revoked for everyone, making the database append-only:

REVOKE UPDATE, DELETE ON Blocks FROM PUBLIC; -- Repeat for every table.

People can only add new data (INSERT) into the database. But, to prevent invalid entries, INSERT into the Transactions and Blocks tables is possible only via stored procedures that enforce the blockchain protocol rules.
Like Bitcoin, each new block is added by the user (the miner) who finds a valid nonce through a computationally intensive process (proof of work). The blockchain protocol specifies the rules that control how and what new data is added to the blockchain, and it determines how, and by whom, the validity of new entries is judged. In short, a blockchain protocol is a constitution in computer code, as explained in the article "Blockchain: The Quiet Revolution" (https://lnkd.in/efD2FkV6). Here is the complete code for our blockchain database: https://lnkd.in/e7JZHCPi Except it is not complete! It lacks some minor details, such as decentralization, network propagation, cryptographic validation, and millions of identical duplicates worldwide! Such features are not trivial to implement in a traditional SQL database, so don't try this at home! If you have an idea to improve this code, feel free to fork the repository, clone it, make your changes, test them locally, and then open a PR. Please explain your reasoning in the PR. #blockchain #database #bitcoin #crypto #cryptocurrency #paralead #SQL
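The nonce search and hash-linking described above can be illustrated with a toy Python sketch. This is an educational stand-in, not the post's SQL implementation and nothing like real Bitcoin's validation; the difficulty and transaction shapes are made up.

```python
import hashlib
import json

# Toy, in-memory illustration of the hash-linked, proof-of-work block
# structure described above. Illustrative only: not the post's SQL code
# and not real Bitcoin (no network, no signatures, no UTXO checks).

def block_hash(block):
    """Hash a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(prev_hash, transactions, difficulty=2):
    """Find a nonce so the block's hash starts with `difficulty` zeros
    (the computationally intensive proof-of-work step)."""
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

genesis, genesis_hash = mine_block("0" * 64, [])
block1, block1_hash = mine_block(
    genesis_hash, [{"from": "a", "to": "b", "amount": 5}]
)
```

Because each block embeds the previous block's hash, altering any stored transaction changes that block's hash and breaks the link to every later block, which is the tamper-evidence the REVOKE statements alone cannot provide.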
-
PaperScore reposted this
Here, I introduce the Conceptual and Mathematical Model behind ParaLead. #corporategovernance #DAC #DAO #Governance
The Introduction to ParaLead
https://www.youtube.com/
-
This article explores how integrating blockchain technology can revolutionize the education sector. It highlights the key features of blockchain, such as immutability, reliability, transparency, and trust, which address long-standing challenges in record-keeping, evaluations, and digital certification. Blockchain provides a secure, decentralized, and efficient way to manage student records, ensure transparent assessments, issue digital certificates, and streamline university admissions. As e-learning expands, blockchain offers a solution to safeguard data integrity and improve educational administrative processes. #Blockchain #Education #EdTech #DigitalTransformation #Elearning #Innovation
-
PaperScore reposted this
While correlation is symmetric, regression is not, even with one predictor. Consider regressing y on x: y = a₁ + b₁·x + ε₁, and regressing x on y: x = a₂ + b₂·y + ε₂. Since b₁ = Cov(x,y) / Var(x) and b₂ = Cov(x,y) / Var(y), we have b₁·b₂ = ρ², which means b₂ ≠ 1/b₁ unless x and y are perfectly correlated (|ρ| = 1).

However, the p-values for b₁ and b₂ are the same. This is because the t-statistic depends only on the correlation (ρ) and the sample size (n): t = ρ·√(n−2) / √(1−ρ²). Moreover, since R² = ρ² and F = t², these also remain unchanged between the two regressions. This means that if one regression finds a significant relationship, so will the other. The equivalent hypothesis-testing outcomes allow some flexibility in model setup in exploratory research when you are not sure which variable is dependent and which is independent. (By "allow", I mean you may get away with it!)

Nevertheless, when adding control or other independent variables, the choice of which variable to regress on the others becomes crucial. The decision is no longer arbitrary, because it should be based on which variable's variance can be explained by the others. This should be determined by the underlying plausible causal relationships and temporal precedence among the variables. Temporal precedence is necessary (but not sufficient) for causal inference: it refers to the logical ordering of events, where the cause must precede the effect. The variables believed to cause the effect should be treated as the independent variables. For example, it would be appropriate to regress income on demographic factors like age and gender to test whether demographics explain the variance in income in a population, because demographic characteristics are plausible antecedents (causes) of income. But income cannot plausibly influence a person's gender, unless you are studying particularly expensive surgeries! Here is a code to play with:
https://lnkd.in/e9shWQw7 #regression #datascience #data_analytics #data #machinelearning #causalinference #causation #modeling #correlation #research #RStudio
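The symmetry claims above are easy to verify numerically. Here is a minimal Python check with made-up data (the linked R code is the post's own version): the two slopes are not reciprocals, yet their product equals ρ², so the t-statistic, and hence the p-value, is identical whichever variable is regressed on the other.

```python
import math

# Numeric check of the symmetry claims above, on arbitrary illustrative data.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
var_x = sum((a - mean_x) ** 2 for a in x) / n
var_y = sum((b - mean_y) ** 2 for b in y) / n

b1 = cov / var_x  # slope of y ~ x
b2 = cov / var_y  # slope of x ~ y
r = cov / math.sqrt(var_x * var_y)  # correlation rho

product = b1 * b2  # equals r**2, so b2 != 1/b1 unless |r| == 1
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)  # same for both regressions
```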
-
PaperScore reposted this
Some data scientists like to use deep learning to tackle every problem! While deep neural networks are powerful, they also have limitations. For instance, interpreting a trained model can be challenging, whereas linear regression models provide valuable theoretical insights. Often, a well-constructed linear regression model, incorporating interaction terms, transformations, polynomial terms, dummy variables, and lagged predictors, is sufficiently robust for many problems. For example, when predicting Sales for a company, you can include a variety of terms (covariates): AdSpend (advertising costs), Price (treatment), Seasonality (time of year, encoded as dummy variables), LaggedSales (sales in the previous year), AdSpend × ProductQuality, Price/CompetitorPrice, Price², Log(LaggedSales), etc.

However, this can lead to hundreds of terms, which can result in overfitting and multicollinearity. To avoid that, we can apply regularization techniques and select the most relevant terms. Consider the multiple linear regression model y = Xβ + ε, where:
X: an n-by-m design matrix containing the training data for ALL the potential covariates, including all the predictors, their transformations, interactions, etc. It also includes a column of ones for the intercept.
y: an n-by-1 vector of observed outcomes in the training data
ε: an n-by-1 vector of errors
β: an m-by-1 vector of coefficients

OLS can give us the optimal coefficients β*, but the result is probably overfit. To find the optimal subset of terms in the model, let's define some variables:
z: an m-by-1 binary vector indicating which terms are included in the model, so m(z) = sum(z) is the number of terms in the final regression model. Since we always need an intercept, z[1] = 1.
X_z = X[ , z]: an n-by-m(z) design matrix with only the columns where z is 1.
β_z: an m(z)-by-1 vector of coefficients for the terms chosen by z.

The new regression model becomes (1). If we multiply both sides by (2) and assume that the ε term is negligible, we obtain (3), which is the OLS result, minimizing the residual sum of squares (4). Moreover:
T: the test dataset (complete matrix)
T_z: the subset of the test data containing only the columns selected by the vector z
y′: the vector of observed outcomes in the test data, with the estimated outcomes given in (5)
e: the vector of out-of-sample (OOS) residuals (for z)

To find the optimal subset of terms (z*), we minimize the OOS mean squared error (6). But since the size of the test set (n′) is constant, the binary integer programming (BIP) problem (7) summarizes everything. This R code solves the problem using different methods: https://lnkd.in/eWJRNjmt

In this example, stepwise regression (AIC) reached an OOS MSE of 1.005 after 3 minutes. Lasso, a regularization method that penalizes the OLS objective for complexity, reached an OOS MSE of 1.16 almost instantly. The genetic algorithm achieved the lowest OOS MSE, 0.99, but took about 12 minutes to compute.
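As a toy illustration of the subset-selection problem above, the following Python sketch enumerates every inclusion vector z by brute force (intercept always kept) and picks the one with the lowest OOS MSE. The data, column roles, and pure-Python OLS solver are illustrative assumptions; brute force only works for a handful of candidate terms, which is exactly why stepwise selection, Lasso, or a genetic algorithm are used in practice.

```python
import itertools

# Brute-force illustration of the binary subset-selection problem above.
# Data and column roles are made up: column 0 is the intercept, column 1
# a useful predictor, column 2 pure noise.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A small, well-conditioned)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """OLS via the normal equations: beta = (X'X)^(-1) X'y."""
    m, n = len(X[0]), len(X)
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(m)]
    return solve(XtX, Xty)

def oos_mse(X_tr, y_tr, X_te, y_te, z):
    """Fit OLS on the columns selected by z; return test-set MSE."""
    cols = [j for j, keep in enumerate(z) if keep]
    beta = ols([[row[j] for j in cols] for row in X_tr], y_tr)
    preds = [sum(b * row[j] for b, j in zip(beta, cols)) for row in X_te]
    return sum((p - t) ** 2 for p, t in zip(preds, y_te)) / len(y_te)

X_train = [[1, 1, 5], [1, 2, 1], [1, 3, 4], [1, 4, 2], [1, 5, 3]]
y_train = [2.0, 4.1, 5.9, 8.2, 9.9]  # roughly y = 2 * column 1
X_test = [[1, 6, 9], [1, 7, 0]]
y_test = [12.1, 13.8]

# Enumerate all z with the intercept forced in; keep the OOS-MSE minimizer.
best_z = min(
    (z for z in itertools.product([0, 1], repeat=3) if z[0] == 1),
    key=lambda z: oos_mse(X_train, y_train, X_test, y_test, z),
)
```

On this tiny example the search drops the noise column, selecting z = (1, 1, 0): including the irrelevant term fits the training data slightly better but predicts the held-out points worse.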
-
I am speechless! #ChatGPT #Autobiography
-
Which jobs do you believe AI will replace in the coming years? #AIResearch #DataScience #PaperScore #ArtificialIntelligence #Singularity #AGI