CSM Testbed: Structural Analysis for Aerospace Engineering

The CSM Testbed is a software system developed at NASA Langley Research Center to provide a common structural analysis environment for engineers, researchers, and developers. It integrates various computer systems, including graphics workstations, mini-supercomputers, and supercomputers, to enable large-scale nonlinear stress analyses of shell-type structures. The CSM Testbed features a Fortran macro-processor, data management utilities, and structural analysis capabilities, including direct banded solvers, a sparse out-of-core Choleski equation solver, and various element processors. Researchers interact using CLAMP procedures or Fortran processors to study nonlinear solution strategies and implement new element formulations.

NASA Technical Memorandum 100643

Large-Scale Structural Analysis:
The Structural Analyst, The CSM Testbed,
and The NAS System

Norman F. Knight, Jr., Susan L. McCleary,
Steven C. Macy, and Mohammad A. Aminpour

(NASA-TM-100643) LARGE-SCALE STRUCTURAL ANALYSIS: THE STRUCTURAL ANALYST, THE CSM TESTBED AND THE NAS SYSTEM (NASA, Langley Research Center) 106 p CSCL 20K  N89-24673  Unclas  G3/39 0212652

March 1989

National Aeronautics and Space Administration
Langley Research Center
Hampton, Virginia 23665-5225



Preface

A research activity named Computational Structural Mechanics, or CSM, at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. New methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented. Selected application studies performed on the NAS CRAY-2 computer system are also summarized.

List of Figures

Figure Title

1. Integrated computing environment.
2. Concept of the CSM Testbed software system.
3. Implementation of the CSM Testbed software system.
4. Distributed computing environment of CSM.
5. Generic element processor template.
6. Finite element model of composite hat-stiffened panel.
7. Repeating element for composite hat-stiffened panel.
8. Buckling load interaction diagram.
9. Buckling mode shapes for composite hat-stiffened panel - 4-node model.
10. Composite blade-stiffened panel with discontinuous stiffener.
11. Finite element model of composite blade-stiffened panel.
12. Test and analysis correlation for end-shortening results for blade-stiffened panel.
13. Comparison of moire-fringe pattern from test with contour plot of out-of-plane deflections from analysis.
14. End-shortening results for composite blade-stiffened panels.
15. Test and analysis correlation for end-shortening results for composite blade-stiffened panel with a discontinuous stiffener.
16. Out-of-plane deflection at hole and blade-stiffener.
17. Deformed geometry shapes with Nx distributions.
18. Longitudinal inplane stress resultant Nx distributions at panel midlength.
19. Tsai-Hill criterion for outer fiber surface of panel skin.
20. Sources of interlaminar stress gradients.
21. Three-dimensional composite problem.
22. Finite element models of the three-dimensional composite plate.
23. Interlaminar normal stress σz distribution along the interface between the 90-degree and 0-degree layers.
24. Normal stress σz distribution at the free-edge in the thickness direction.
25. Cylinder with cutouts - geometry, properties, and loading.
26. Finite element models of cylinder with cutouts.
27. Nonlinear response of cylinder with cutouts - Out-of-plane deflections.
28. Deformed geometry plots for several load steps - Mesh 3 results.
29. Axial stress at x = 0 for various load steps - Mesh 3 results.
30. Axial stress at x = 4.5 in. for various load steps - Mesh 3 results.
31. Comparison of Testbed and STAGSC-1 results for the nonlinear response of cylinder with cutouts - Out-of-plane deflections.
32. Half model of cylinder with cutouts using 4-node elements.
33. Comparison of eighth-model and half-model results for nonlinear response of cylinder with cutouts - Out-of-plane deflections.
34. Deformed geometry plots for several load steps - Half-model results.
35. Pear-shaped cylinder - geometry, properties, and loading.
36. Finite element models of pear-shaped cylinder.
37. Nonlinear response of pear-shaped cylinder.
38. Nonlinear response of pear-shaped cylinder - Mesh 2.
39. Normal deflection distribution at cylinder midlength for various load steps.
40. Longitudinal inplane stress resultant distribution at cylinder midlength for various load steps.
41. Deformed geometry plots for several load steps.
42. Truncated conical shell - geometry, properties, and loading.
43. Finite element model of truncated conical shell.
44. Normal deflections at points "a" and "b" on truncated conical shell.

List of Tables

Table Title

1. Selected CSM Testbed processors.
2. Summary of current ESi processors.
3. Sample linear stress analysis runstream for the CSM Testbed.
4. Comparison of buckling results for composite hat-stiffened panel.
5. Selected processor execution times for hat-stiffened panel.
6. Selected processor execution times for composite blade-stiffened panel with a discontinuous stiffener.
7. Performance of direct solvers in processor BAND.
8. Selected processor execution times for 3-D stress analysis.
9. Elastic collapse loads for cylinder with cutouts - eighth-model results.
10. Selected processor execution times for cylinder with cutouts.
11. Selected processor execution times for cylinder with cutouts.
12. Elastic collapse loads for pear-shaped cylinder.
13. Selected processor execution times for pear-shaped cylinder.
14. Selected processor execution times for truncated conical shell.
15. Selected processor execution times for SRM tang-clevis joint.
16. Selected processor execution times for global SRB shell model.

LARGE-SCALE STRUCTURAL ANALYSIS:

The Structural Analyst, The CSM Testbed, and The NAS System

Norman F. Knight, Jr.†, Susan L. McCleary‡, Steven C. Macy‡, and Mohammad A. Aminpour*

NASA Langley Research Center

Hampton, Virginia

Introduction

Over the past decade, the structural analyst has had to adapt to a changing computing environment. The computing environment includes software as well as hardware. Research in computational methods for structural analysis has been severely hampered by the complexity and cost of the software development process. Although usually interested in only a small aspect of the overall analysis problem, each researcher is often forced to construct much of the supporting software. This time-consuming and expensive approach is frequently required because existing software that the researcher could potentially exploit is not documented in sufficient detail internally, may not be suitable because of software architecture design, or both. After enduring this time-consuming software development effort, the researcher may find that a thorough evaluation of the new method is still impossible due to limitations of the supporting software. This scenario is true for many "research-oriented" finite element codes which have a limited element library or have a problem-size limit because of the use of a memory-resident equation solver. In addition, new computer architectures with vector and multiprocessor capabilities are being manufactured for increased computational power. Analysis and computational algorithms that can exploit these new computer architectures need to be developed. For credibility, these new algorithms should be developed and evaluated in a standard, general-purpose finite element structural analysis software system rather than in an isolated research software system.

At the NASA Langley Research Center, an intense effort is being directed towards developing advanced structural analysis methods and identifying the requirements of the next-generation structural analysis software system which will exploit multiple vector processor computers.

† Aerospace Engineer, Structural Mechanics Branch, Structural Mechanics Division.
‡ Structural Engineers, Planning Research Corporation.
* Research Scientist, Analytical Services and Materials, Inc.

CDC CYBER 175 computer system using the NOS operating system. The third type was the supercomputer such as a CDC VPS/32 supercomputer using the VSOS operating system.

Each of these systems was used for a variety of structural analysis application studies and for

the development of new structural analysis methods. However, the inter-machine communi-

cation link was limited to exchanging magnetic tapes between computer types. A knowledge

of several computer operating systems was required for the user to work effectively on several

computer types. The limitations of these stand-alone systems with vendor-specific operating

systems and communication protocols (e.g., VAX/VMS, CDC/NOS) were soon realized, and distributed computing environments were developed.

Distributed Systems

Distributed environments involve several types of computer systems, namely personal computers (PCs), graphics workstations, minicomputers, mainframes, and supercomputers linked

together through networks. An example of a distributed system is the coupling of a DEC

VAX/VMS minicomputer through DECnet and a CRAY-station software system with a CRAY

X-MP/48 supercomputer running the COS operating system. The changes occurring in the

field of computer networking represent probably the most dramatic changes affecting structural

analysts. Networking removed the constraint of physical distance. Working remotely from a

supercomputer presented a new set of problems that, once solved, resulted in a unique new

capability for the structural analysts.

The network at the NASA Langley Research Center uses Ethernet within buildings and a

fiber optic Pronet 10 token-passing ring network called LARCnet between buildings. Initially

the gateways between buildings would route only a Xerox XNS-based protocol developed at

Langley. Connecting to one of the Ethernets was a Vitalink Bridge that routed both TCP/IP

and DECnet to the NAS facility at Ames over a 256 kilobits per second satellite link. The

evolution of this network has followed the networks developing in industry. Workstations in

many buildings are supported with routing gateways through the LARCnet fiber optic system,

through a Pronet P4200 gateway connected to the Vitalink directly. The communication link

with Ames has been upgraded to a one megabit per second transfer rate (i.e., T1 link), and it

was discovered that a land line is preferable to a satellite link for interactive use. The result

is that the miles between Ames and Langley are no longer a problem; Langley researchers can use the NAS system at Ames as easily as if it were located at Langley. The NAS CRAY-2 supercomputer appears to the structural analyst as if it were embedded in the local workstation.

The NAS CRAY-2 supercomputer uses the UNICOS† operating system, has four processors (each with a clock-cycle time of 4.1 nanoseconds), and has a total memory size of 256 million 64-bit words. The CRAY-2 supercomputer is capable of over one hundred times the computational capability of a VAX 11/785 minicomputer. In addition, the CRAY-2 computer system is a native 64-bit wordsize machine, and roundoff problems that can be a problem on 32-bit machines are usually eliminated. Even with 256 million words of main memory, finite element system matrices for large-scale structural analysis may not fit in memory. Auxiliary data storage requirements for these analyses are another concern. Single temporary files may require in excess of 500 megabytes of storage. Hence, coordination or scheduling of these runs by the analyst is necessary to avoid exceeding the available auxiliary storage.
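The memory pressure described above is easy to quantify. The sketch below estimates the storage for a symmetric banded stiffness matrix on a 64-bit-word machine; the model size and semibandwidth are illustrative assumptions, not figures from this report.

```python
# Rough storage estimate for a symmetric banded stiffness matrix on a
# 64-bit-word machine such as the CRAY-2. The 100,000-dof model and
# semibandwidth 1,500 below are hypothetical, for illustration only.

def banded_storage_words(n_dof, semibandwidth):
    """Words needed to store one triangle of a symmetric banded matrix."""
    return n_dof * (semibandwidth + 1)

def words_to_megabytes(words, bytes_per_word=8):
    """Convert 64-bit words to megabytes."""
    return words * bytes_per_word / 1.0e6

words = banded_storage_words(100_000, 1_500)
print(f"{words:,} words = {words_to_megabytes(words):.0f} MB")
print(f"fraction of a 256M-word memory: {words / 256e6:.1%}")
```

Even this banded (rather than full) storage consumes well over half of a 256-million-word main memory, which is why out-of-core solvers and careful scheduling of auxiliary storage matter.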

Distributed computer environments are made up of stand-alone computers of different sizes, architectures, and vendors, with a common network protocol offering the user easy file transfer and remote login functions. Structural analysts require the diverse computer capabilities offered by a distributed environment (workstation-mainframe-supercomputer), but cannot afford the "overhead" of learning the operating system commands for each system they use. Software developers have a similar problem, but at a lower level. They cannot afford the "overhead" of learning a new set of system calls for each computer on which they wish to implement their application software. To alleviate this "overhead", integrated computing environments are evolving which exhibit a common operating system.

† The UNICOS operating system is derived from the AT&T UNIX System V operating system. UNICOS is also based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California.

applications programs, a modular, public-domain, machine-independent, architecturally-simple software development environment has been constructed. This system is denoted the CSM Testbed software system, and its concept is depicted by a pyramid (see figure 2). The base of the pyramid is the computer and its operating system. The computer operating system is provided by the computer vendor and may be different for each vendor. Currently, the CSM Testbed is primarily targeted for UNIX-based systems in order to minimize these differences. The Testbed architecture insulates both the engineer and the methods developer from those differences by providing a consistent interface across various computer systems. The Testbed command language CLAMP procedures and application processors may be accessed as part of a methods research activity or as part of an application study. The methods development environment of the CSM Testbed is further described by Gillian and Lotts [9]. One goal of the CSM Testbed is to provide a common structural analysis environment for three types of users:

- engineers solving complex structures problems,
- researchers developing advanced structural analysis methods, and
- developers designing the software architecture to exploit multiprocessor computers.


Fig. 2 Concept of the CSM Testbed software system.

The CSM Testbed software system is a highly modular and flexible structural analysis system for studying computational methods and for exploring new multiprocessor and vector computers. The CSM Testbed is used by a group of researchers from universities, industry, and government agencies. Unrestricted access to all parts of the code, including the data manager and the command language, is permitted. Research on these elements of software design is needed because deficiencies in the data management strategy can have a devastating impact on the performance of a large structural analysis code, totally masking the relative merits of competing computational techniques. Furthermore, software designs that exploit multiprocessor computers must be developed; in particular, techniques for handling parallel input/output (I/O) are required.

The initial CSM Testbed, called NICE/SPAR, began with the integration of the NICE system (Felippa [10]; Felippa and Stanley [11]) and Level 13 of SPAR (Whetstone [12]). Since then, new capabilities and improvements have been implemented in the CSM Testbed. Each step of the evolution of the CSM Testbed provides improved structural analysis capabilities to structural analysts. Implementation of new capabilities is done using the framework of the CSM Testbed as depicted in figure 3. A brief description of selected CSM Testbed processors is given in Table 1.

Fig. 3 Implementation of the CSM Testbed software system.

Fig. 4 Distributed computing environment of CSM.

Computing Environment

The computing environment of the CSM activity is currently a distributed environment as

shown in figure 4. Typically, a structural analyst will develop a finite element model of the

structure either by using a preprocessing software system such as PATRAN or by using the

CSM Testbed command language for “parameterizing” the model. Runstreams are the vehicle

used to perform structural analyses with the CSM Testbed. The term “runstream” most

commonly refers to the file (or files) of input data and commands used to perform a specific

analysis, although it may also refer to input during an interactive session. Runstreams for the CSM Testbed are usually developed and verified on a workstation, and then transferred to the

NAS CRAY-2 computer system for complete processing. Following a successful execution, the

computational database may then be "unloaded" (i.e., converted from the binary format of the

NAS CRAY-2 computer system to ASCII format), transferred intact to the Langley Research

Center using the NASnet wide-area network, and then "loaded" (i.e., converted from ASCII

format to the binary format of the desired workstation) back into a computational database

which has the identical Testbed library format as on the NAS CRAY-2 computer system.

Finally, postprocessing is done to help the structural analyst visualize the computed structural

response. The sequence of steps just described depicts the computing environment to which

the structural analyst must adapt in order to exploit the full potential of available computing

systems.
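The "unload"/"load" step above can be sketched as a binary-to-ASCII round trip. The record layout below is invented for illustration; it is not the actual Testbed library format.

```python
# Sketch of the "unload"/"load" idea: a binary computational database is
# converted to a portable ASCII form for transfer between machines with
# different binary formats, then converted back. The (name, values) record
# layout here is hypothetical, not the real GAL library format.
import struct

def unload(records):
    """Binary (name, packed doubles) records -> portable ASCII lines."""
    lines = []
    for name, blob in records:
        values = struct.unpack(f"<{len(blob) // 8}d", blob)
        lines.append(name + " " + " ".join(repr(v) for v in values))
    return "\n".join(lines)

def load(text):
    """Portable ASCII lines -> binary records in the local format."""
    records = []
    for line in text.splitlines():
        name, *fields = line.split()
        values = [float(f) for f in fields]
        records.append((name, struct.pack(f"<{len(values)}d", *values)))
    return records

disp = ("NODAL.DISPLACEMENT", struct.pack("<3d", 0.1, -0.25, 1.5e-3))
assert load(unload([disp])) == [disp]   # round trip preserves the data
```

The point, as in the text, is that the ASCII form is machine-independent, so the database can move between the CRAY-2 and a workstation whose binary word format differs.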

To exploit this new computing environment, expertise is needed in the areas of computational strategies, numerical techniques, computer science, and communication networks, together with a firm understanding of the principles of structural mechanics. New computing hardware environments, like the NAS System, offer the computational power, memory, and disk space necessary for routine analysis of large structural models. New computing software environments, like the CSM Testbed, offer an integrated system with data management, a general command language, and many different application processors - features that enable the structural analyst to develop new analysis methods and to tailor the analysis for specific application needs.

CSM Testbed Architecture Features

The CSM Testbed is a Fortran program organized as a single executable file, called a macro-processor, which calls structural applications modules that have been incorporated as subroutines. The macro-processor and applications modules interface with the operating system for their command input and data management functions through a set of common "architectural utilities". Processors access the Testbed utilities by calling entry points implemented as Fortran-77 functions and subroutines which are available in the Testbed object libraries. Applications processors do not communicate directly with each other, but instead communicate by exchanging named data objects in a database managed by a data manager called GAL (Global Access Library). The user controls execution of applications processors using an interactive, or batch, command runstream written in a command language, called CLAMP (Command Language for Applied Mechanics Processors), which is processed by CLIP (Command Language Interpreter Program).

Command Language

The Testbed command language CLAMP is a generic language originally designed to support the NICE system and to offer program developers the means for building problem-oriented languages (Felippa [13-15]). It may be viewed as a stream of free-field command records read from an appropriate command source (the user's terminal, actual files, or processor messages). The commands are interpreted by a "filter" utility called CLIP, whose function is to produce object records for use by its user program. The standard operating mode of CLIP is the processor-command mode. Commands are directly supplied by the user, retrieved from ordinary card-image files, or extracted from the global database, and submitted to the running processor.
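The "filter" role of a command interpreter like CLIP can be illustrated with a toy parser that turns free-field command records into object records for a user program. The syntax and the verb/item model here are hypothetical, not actual CLAMP.

```python
# A toy "filter" in the spirit of CLIP: it reads a free-field command
# record and produces a simple object record (a verb plus typed items).
# The syntax is invented for illustration; it is not real CLAMP.

def parse_record(record):
    """Split a free-field command record into a verb and typed items."""
    fields = record.split()
    verb, items = fields[0].upper(), []
    for f in fields[1:]:
        try:
            items.append(int(f))          # integer item
        except ValueError:
            try:
                items.append(float(f))    # floating-point item
            except ValueError:
                items.append(f.upper())   # keyword item
    return verb, items

assert parse_record("set load 1 2.5") == ("SET", ["LOAD", 1, 2.5])
```

The benefit, as the text notes, is that the user program never sees raw character records, only typed object records, regardless of whether the source was a terminal, a file, or the database.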

Since database files are subdivided or partitioned into datasets, the Testbed data manager is classified as a file partition manager. To a processor, a GAL data library is analogous to a file. It must be opened, written, read, closed, and deleted explicitly. The global access library resides on a direct-access disk file and contains a directory structure called a table of contents (TOC) through which specific datasets may be addressed. Low-level I/O routines access the GAL library file in a word-addressable scheme as described by Felippa [18]. The data management system is accessible to the user through the command language directives and to the running processors through the GAL-Processor interface.

The global database is made up of sets of data libraries residing on direct-access disk files. Data libraries are collections of named datasets, which are collections of dataset records. The data library format supported by the Testbed is called GAL/82, which can contain nominal datasets made up of named records. Some of the advantages of using this form of data library are: i) the order in which records are defined is irrelevant, ii) the data contained in the records may be accessed from the command level, and iii) the record data type is maintained by the manager; this simplifies context-directed display operations and automatic type conversion.
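The library/dataset/record hierarchy and the manager-maintained record types can be modeled in a few lines. The classes below are invented for illustration; the real GAL/82 library is a word-addressable direct-access disk file, not in-memory dictionaries.

```python
# A toy model of the hierarchy: a library is a collection of named
# datasets addressed through a table of contents, and each dataset is a
# collection of named, typed records. Hypothetical classes for
# illustration only; not the actual GAL/82 implementation.

class Dataset:
    def __init__(self):
        self.records = {}                  # record name -> (type_code, data)

    def put(self, name, data):
        # The manager, not the caller, records the data type ("I" for
        # integer, "D" for double), enabling automatic type conversion.
        type_code = "I" if all(isinstance(v, int) for v in data) else "D"
        self.records[name] = (type_code, list(data))

class Library:
    def __init__(self):
        self.toc = {}                      # table of contents -> Dataset

    def dataset(self, name):
        return self.toc.setdefault(name, Dataset())

lib = Library()
lib.dataset("NODAL.COORDS").put("X", [0.0, 1.5, 3.0])
lib.dataset("NODAL.COORDS").put("IDS", [1, 2, 3])
assert lib.toc["NODAL.COORDS"].records["IDS"][0] == "I"
```

Note that records may be defined in any order and looked up by name, mirroring advantages i) and iii) above.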

To provide the efficiency required to process the volume of data required for a complex structural analysis, all usual overhead associated with Fortran has been eliminated. The actual I/O interface between the GAL data manager and the UNIX operating system is accomplished through a set of block I/O routines written in the C programming language. For non-UNIX computer systems, this interface is accomplished through a set of assembly-language routines which are unique to each computer system.

User Interface

The user may develop runstreams using the high-level command language CLAMP for a specific engineering problem (e.g., Felippa [13, 15]). These runstreams may contain CLAMP directives and CLAMP procedures which are processed by the command language interpreter CLIP. Application processors are called using the [XQT command, or the global access library GAL (e.g., Wright et al. [16]) may be interrogated. Engineers typically interact with the Testbed using simple runstreams or through CLAMP procedures. Researchers interact using CLAMP procedures (e.g., to study nonlinear solution strategies) or through Fortran processors (e.g., to implement new element formulations). Developers interact with the entire Testbed architecture, including the design of the command language, the data handling techniques for large-scale analyses, and the strategy for I/O on parallel computers.

CSM Testbed Structural Analysis Features

The CSM Testbed presently provides structural analysis capabilities that permit an analyst to perform large-scale nonlinear stress analyses of shell-type structures. Three-dimensional stress analyses are presently limited to linear elastic orthotropic materials. Eigenvalue problems associated with either linear bifurcation buckling or linear vibration analyses may also be solved. Transient dynamic analyses are limited to linear elastic problems using either direct time integration or mode superposition to obtain the transient response. Some of the newly-developed engineering features of the CSM Testbed are the equation solvers, the element library, the material modeling, and the solution procedures. Interface utilities to and from the PATRAN graphics system have been developed to support the modeling and analysis of large-scale structures. Access to such a preprocessing and postprocessing software system enhances the structural analyst's ability to understand the structural behavior through visualization of the computed results.

Equation Solvers

The system of equations that arises in static structural analysis applications has the general form Ku = f, where K is the symmetric, positive definite stiffness matrix, f is the load vector, and u is the vector of generalized displacements. Such linear systems can be as large as several hundred thousand degrees-of-freedom (dof) and often require significant computing resources, both memory and execution time. The structure of the stiffness matrices in these applications is often sparse, although in many applications an ordering of the nodes which minimizes the bandwidth makes banded or profile (skyline) type storage of these matrices practical. The choice of the particular method used to solve Ku = f will depend on the non-zero structure of K and, in the case of the iterative methods, on the condition number of K. In addition, the architecture of the computer, particularly for modern vector and parallel computers, influences both the choice and implementation of methods used to solve these linear systems of equations.
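To make the Ku = f discussion concrete, here is a minimal dense Cholesky solve for a tiny symmetric positive definite K. It is a sketch only: production solvers such as the Testbed's banded and sparse out-of-core Choleski processors exploit the band or skyline structure rather than storing the full matrix.

```python
# Minimal dense Cholesky solve of K u = f for a small symmetric positive
# definite K (illustrative only; real structural solvers use banded,
# skyline, or sparse storage and out-of-core techniques).

def cholesky(K):
    """Factor K = L L^T, returning the lower-triangular L."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (K[i][i] - s) ** 0.5
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

def solve(K, f):
    """Solve K u = f via forward and back substitution on L."""
    L, n = cholesky(K), len(K)
    y = [0.0] * n
    for i in range(n):                       # forward: L y = f
        y[i] = (f[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    u = [0.0] * n
    for i in reversed(range(n)):             # backward: L^T u = y
        u[i] = (y[i] - sum(L[k][i] * u[k] for k in range(i + 1, n))) / L[i][i]
    return u

K = [[4.0, 1.0], [1.0, 3.0]]                 # a tiny SPD "stiffness" matrix
f = [1.0, 2.0]
u = solve(K, f)
# residual K u - f should be numerically zero
assert all(abs(sum(K[i][j] * u[j] for j in range(2)) - f[i]) < 1e-12
           for i in range(2))
```

Symmetry and positive definiteness are what make the Cholesky factorization applicable here; for a banded K, the same factorization touches only entries within the band, which is the basis of the storage savings discussed above.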

O r t e g a l Q presents a thorough description of these various methods a n d their implementations as applied to vector a n d parallel computers. T h e data structure of t h e global stiffness m a t r i x is a key factor in t h e design and implemen-

t a t i o n of equation solvers for t h e CRAY-2 architecture and t h e Testbed software (e.g., Poole

a n d Overman *'). T h e generation of stiffness matrices is accomplished by several different

13