Performance Prediction of a S-CO2 Turbine

Appl. Sci. 2020, 10, 4999, 7 of 21

As shown in the sketch, the intermediate matrix is obtained by padding the original input matrix. The elements of the output are then the products of the kernel and the corresponding input elements as the kernel moves at a specified stride.

Figure 2. A simple sketch of the deconvolutional layer.

Convolutional neural networks have become more and more popular in computer vision [33,34], natural language processing [35], and other fields due to their powerful ability to extract and learn features. It is therefore a natural idea to use convolutional neural networks to extract the low-dimensional performance from the high-dimensional physical fields. Mathematically, the convolution operation is a multiplication of the input and the kernel at certain strides, as shown in Figure 3. Similar to Figure 2, the input, kernel, and output in this convolutional example are marked in blue, gray, and green, respectively. As in Figure 2, the intermediate matrix is obtained by padding the original input matrix, and the elements of the output are the products of the kernel and the corresponding input elements as the kernel moves at a specified stride.
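As an illustrative sketch (not the authors' code), the padded, strided multiply-and-sum described above can be written in plain NumPy; the input values, kernel, padding, and stride here are hypothetical:

```python
import numpy as np

def conv2d(x, kernel, stride=1, padding=0):
    """Cross-correlation as used in CNN layers: pad the input, then slide
    the kernel at the given stride, multiplying and summing at each step."""
    # Pad the input on all sides (the "intermediate matrix" in Figure 3).
    xp = np.pad(x, padding)
    kh, kw = kernel.shape
    oh = (xp.shape[0] - kh) // stride + 1
    ow = (xp.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = xp[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(window * kernel)
    return out

x = np.arange(16.0).reshape(4, 4)      # 4x4 input
k = np.ones((3, 3))                    # 3x3 summing kernel
y = conv2d(x, k, stride=1, padding=1)  # padding of 1 keeps the 4x4 size
print(y.shape)  # (4, 4)
```

A deconvolutional (transposed-convolution) layer as in Figure 2 uses the same multiply-and-sum idea, but pads and spaces the input so that the output is larger than the input.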
Figure 3. A simple sketch of the convolutional layer.

As application scenarios become more complex, convolutional neural networks go deeper and deeper. However, obstacles such as degradation of training accuracy and vanishing/exploding gradients arise with deeper layers. To avoid these problems, several well-established techniques were applied in our approach: data normalization, intermediate normalization layers [36], and the Residual Neural Network (ResNet) [37]. In this study, the input and output were normalized to (−1, 1) by maximum-and-minimum normalization, and batch normalization was applied after every deconvolution or convolution except for the output layer.
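The maximum-and-minimum normalization mentioned above can be sketched in a few lines of NumPy (an illustrative example, not the authors' code; the field values are hypothetical):

```python
import numpy as np

def minmax_normalize(x):
    """Scale an array linearly into (-1, 1) using its minimum and maximum,
    as in maximum-and-minimum normalization."""
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0

field = np.array([300.0, 350.0, 400.0])  # e.g. a temperature field (made-up values)
normalized = minmax_normalize(field)
print(normalized)  # [-1.  0.  1.]
```

The stored `lo` and `hi` values would be reused to map network outputs back to physical units after prediction.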
The detailed architecture of our two-stage algorithm is listed in Table 1, in which Deconv2d denotes a deconvolutional operation, Conv2d a convolutional operation, k is the kernel size, s is the stride, c is the number of output channels, in is the input size of a linear layer, and out is the output size of a linear layer. For a more convenient description, the building block in ResNet is separated in this study into the basic block (a pair of 3 × 3 filters) and the shortcut (a connection operation with identity). As shown in Table 1, the input of the field reconstruction model is first reshaped to a large feature by a linear layer for the subsequent deconvolutional operations.
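The split into a basic block (a pair of 3 × 3 filters) and an identity shortcut can be sketched as follows; this is a single-channel NumPy illustration of the residual idea under assumed random weights, not the architecture of Table 1:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv3x3(x, k):
    """3x3 cross-correlation with padding 1, so the spatial size is preserved."""
    xp = np.pad(x, 1)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

def basic_block(x, k1, k2):
    """ResNet basic block: a pair of 3x3 filters plus an identity shortcut.
    The input is added back to the transformed features before the final ReLU."""
    y = relu(conv3x3(x, k1))
    y = conv3x3(y, k2)
    return relu(y + x)  # identity shortcut: gradients also flow through `+ x`

x = np.random.randn(8, 8)
k1 = np.random.randn(3, 3) * 0.1
k2 = np.random.randn(3, 3) * 0.1
out = basic_block(x, k1, k2)
print(out.shape)  # (8, 8)
```

Because the shortcut passes the input through unchanged, very deep stacks of such blocks avoid the vanishing-gradient problem mentioned above.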
The output features reach a specified size of 256 × 64 × 4 after the transformation of six deconvolutional layers, and the output features are then interpolated to 256 × 64 × 4 and 256 × 64 × 4 for the physical fields of the stator and rotor blades. A similar interpolation operation is used in the performance prediction model: physical fields of different sizes are adjusted to the specified size of 256 × 64 by an interpolation operation, which makes the performance prediction model independent of the input size. After the subsequent convolutional operations from layer 1 to layer 6, average pooling [38] and linear layers are adopted to obtain the objective output from the extracted features.
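As a minimal sketch of such an input-size-independent pipeline (the paper does not specify the interpolation scheme, so nearest-neighbour resampling is assumed here, and the field sizes are made up):

```python
import numpy as np

def resize_nearest(field, out_h, out_w):
    """Nearest-neighbour interpolation of a 2D field to a fixed size,
    so downstream layers never see a varying input resolution."""
    h, w = field.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return field[rows[:, None], cols[None, :]]

# Fields of arbitrary size are all mapped to the same fixed 256 x 64 grid.
small = np.random.randn(100, 30)
large = np.random.randn(500, 120)
r1 = resize_nearest(small, 256, 64)
r2 = resize_nearest(large, 256, 64)
print(r1.shape, r2.shape)  # (256, 64) (256, 64)
```

After the convolutional stack, global average pooling then reduces each feature map to one value, e.g. `features.mean(axis=(-2, -1))`, before the final linear layers.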
In addition, the activation function ReLU [39] is employed to enhance the nonlinear capability of the deep convolutional neural networks.
