(Pro)renin receptor (PRR) contributes to the regulation of many physiological and pathological processes; however, the role of PRR-mediated signaling pathways in myocardial ischemia/reperfusion injury (IRI) remains unclear. In this study, we used an in vitro hypoxia/reoxygenation (H/R) model to mimic IRI and performed PRR knockdown by siRNA and PRR overexpression using cDNA in H9c2 cells. Cell proliferation was assessed by MTT and Cell Counting Kit-8 (CCK-8) assays. Apoptosis-related factors, autophagy markers and β-catenin pathway activity were assessed by real-time PCR and western blotting. After 24 h of hypoxia followed by 2 h of reoxygenation, the expression levels of PRR, LC3B-I/II, Beclin1, cleaved caspase-3, cleaved caspase-9 and Bax were upregulated, indicating increased apoptosis and autophagy in H9c2 cells. In contrast to the effects of PRR downregulation, PRR overexpression inhibited proliferation, induced apoptosis, increased the expression of pro-apoptotic factors and autophagy markers, and promoted activation of the β-catenin pathway. Furthermore, all of these effects were reversed by treatment with the β-catenin antagonist DKK-1. We therefore conclude that PRR activation can trigger H/R-induced apoptosis and autophagy in H9c2 cells through the β-catenin signaling pathway, which may provide new therapeutic targets for the prevention and treatment of myocardial IRI.
This paper considers a fuzzy perceptron with the same topological structure as the conventional linear perceptron. A learning algorithm based on a fuzzy δ rule is proposed for this fuzzy perceptron. The inner operations involved in the working process of this fuzzy perceptron are based on max-min logical operations rather than the conventional multiplication and summation. The initial values of the network weights are fixed at 1. It is shown that each network weight is non-increasing during training and remains unchanged once it falls below 0.5. As proved in this paper, the learning algorithm has the advantage that it converges in a finite number of steps whenever the training patterns are fuzzily separable; this generalizes the corresponding classical result for conventional linear perceptrons. Numerical experiments for the learning algorithm are provided to support our theoretical findings.
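The abstract does not spell out the fuzzy δ rule itself, but the structural properties it states (max-min inner operations, weights initialized to 1, non-increasing weights that freeze once below 0.5) can be illustrated with a minimal sketch. All function names and the specific update formula below are assumptions for illustration, not the paper's algorithm:

```python
def fuzzy_forward(weights, x):
    """Max-min composition replacing weighted sum:
    output = max_i min(w_i, x_i)."""
    return max(min(w, xi) for w, xi in zip(weights, x))


def train(patterns, lr=0.5, epochs=100):
    """Toy training loop mimicking the stated weight behavior
    (hypothetical update rule, not the paper's fuzzy delta rule).

    Weights start at 1 and only decrease; a weight is frozen once
    it drops below 0.5."""
    n = len(patterns[0][0])
    w = [1.0] * n
    for _ in range(epochs):
        changed = False
        for x, target in patterns:
            y = fuzzy_forward(w, x)
            err = y - target
            if err > 0:  # output too large: shrink the weights that contribute
                for i in range(n):
                    if w[i] >= 0.5 and min(w[i], x[i]) >= target:
                        new_w = max(target, w[i] - lr * err)
                        if new_w < w[i]:
                            w[i] = new_w
                            changed = True
        if not changed:  # no weight moved in a full pass: stop
            break
    return w
```

For example, training on the single pattern `([0.9, 0.2], 0.4)` leaves the second weight untouched at 1 (its input never reaches the target) while the first weight decreases monotonically, which matches the non-increasing behavior the abstract describes.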