SysEng 6104 Project Task IV
Inputs for Architecture Generation: Architecture Assessment Model
1. Based on the functional architecture that you provided in question two of Project Task III, select
a parameter that can measure the functionality of each function, sub-function, and sub-sub-function,
such as an MOE (Measure of Effectiveness), MOP (Measure of Performance), or TPM (Technical
Performance Measure).
Function: Arrive in Asteroid vicinity – MOE - distance
Sub-function: Plan trajectory for system and humans – MOP – time and distance
Sub-sub-function: Calculate suitable distance away from Asteroid – TPM -
distance
Sub-function: Slow down to reasonable velocity and distance – MOP - velocity
Sub-sub-function: Cut engines – TPM - throttle
Sub-sub-function: Determine trajectory – TPM – velocity and distance in space
Sub-function: Anchor to surface of asteroid or nearby stable body – MOP – stability of
anchors
Sub-sub-function: Configure anchoring system – TPM – equidistant placement,
weight distribution
Function: Classify Asteroid – MOE – various composition metrics
Sub-function: Prospect Asteroid – MOP – square footage, thermal temperatures,
environmental conditions
Sub-sub-function: Select location of attaching mechanisms – TPM – strength of
attaching mechanisms compared to environmental conditions and load distribution
Sub-function: Map Asteroid Surface – MOP – square footage, distance, surface profile
Sub-sub-function: Incorporate sensor/satellite usage to perform topographical
analysis where needed – TPM - optical square footage refinement
Function: Continue analysis of Asteroid – MOE – various composition and environmental metrics
Sub-function: Consult additional resources if needed – MOP - various composition
metrics
Sub-sub-function: Prepare abstract qualitative/additional methodologies for a
more detailed understanding of Asteroid (assuming all quantitative methods have been used) –
TPM - various composition metrics
Sub-function: Utilize various methods to project worth of continuing mission – MOP - net
profit
Sub-sub-function: Apply relevant financial formulations on-the-spot – TPM - cost
estimation
Sub-function: Estimate level of difficulty to capture Asteroid – MOP - cost-benefit
estimation
Sub-sub-function: Find metrics to scale relative effectiveness of system’s
catching mechanisms – TPM - tensile strength
Function: Capture Asteroid – MOE – sustained weight
Sub-function: Prepare for release of catching mechanism(s) – MOP - time
Sub-sub-function: Configure hardware and software – TPM - power and time
Sub-function: Interface with catching mechanism(s) – MOP – time for learning interface
functionality, loading time between functions
Sub-sub-function: Establish hardware-software integration – TPM - power and
time
Sub-sub-function: Design operable interface – TPM - power and time
Function: Use 1 catching mechanism – MOE - weight of asteroid/catching mechanism’s strength
Sub-function: Release catching mechanism – MOP - velocity
Sub-sub-function: Press necessary hardware configurations - TPM - work
Sub-function: Decide if more catching mechanisms are needed – MOP - weight of
asteroid/catching mechanism’s strength
Sub-sub-function: Compare Asteroid’s size and composition – TPM - volume,
various densities
Sub-sub-function: Estimate necessary tensile strength to secure Asteroid – TPM
- tensile and yield strength
Function: Use 2-5 catching mechanisms – MOE - weight of asteroid/catching mechanism’s
strength
Sub-function: Process synchronicity of catching mechanisms – MOP - time between
release of catching mechanisms
Sub-sub-function: Gather standard time buffers for each mechanism added –
TPM - time between release of catching mechanisms, velocity of release
Sub-function: Monitor for stability of additional catching mechanisms – MOP - tensile
strength over time
Sub-sub-function: Measure added tensile strength individually and collectively –
TPM - tensile, yield strength as individual system-of-systems and as one system
Function: Use >5 catching mechanisms – MOE - weight of asteroid/catching mechanism’s
strength
Sub-function: Process synchronicity of catching mechanisms – MOP - time between
release of catching mechanisms
Sub-sub-function: Gather standard time buffers for each mechanism added –
TPM - time between release of catching mechanisms, velocity of release
Sub-function: Monitor for stability of additional catching mechanisms – MOP - tensile
strength over time
Sub-sub-function: Measure added tensile strength individually and collectively –
TPM - tensile, yield strength as individual system-of-systems and as one system
Function: Determine precise location for catching mechanism(s) placement(s) – MOE – center of
gravity of Asteroid, distributed load strength
Sub-function: Computationally analyze Asteroid to determine best placement(s) – MOP -
surface/core stability
Sub-sub-function: Apply realistic analysis equations – TPM - various composition
metrics
Function: Design catching mechanism’s sequence, sizes, and locations – MOE – system
optimization
Sub-function: Compare alternative methods – MOP – cost/system optimization
Sub-sub-function: Weigh KPPs to find best solution – TPM – cost-benefit
analysis
Sub-sub-function: Conduct proper analysis – TPM – distance between
mechanisms, latching strength over distance
Function: Distribute catching mechanism(s) – MOE – time between release
Sub-function: Interface with operating system to release catching mechanism(s) – MOP –
loading time, loading time/initiated hardware movement
Sub-sub-function: Automate the release method – TPM - computational
specifications
Function: Attach catching mechanism(s) to Asteroid – MOE - work
Sub-function: Verify mechanism(s) have functioned correctly through the operating
system – MOP – projected location of catching mechanisms vs. actual location
Sub-sub-function: Design system of systems and method for attachment – TPM -
distance between mechanisms, latching strength over distance
Function: Retrieve/lock asteroid within system – MOE – distance Asteroid moves
Sub-function: Verify strength of locked connection – MOP - tensile, yield, and
compressive strength
Sub-sub-function: Connect sensors to point of contact for locked connections –
TPM - sensor specifications
Sub-function: Interface with operating system to retrieve/transport Asteroid – MOP –
distance capable of moving, weight capable of moving
Sub-sub-function: Design operating system for functional, easy use – TPM -
Setup time, Operational time/downtime
Function: Retrieve/lock secondary support mechanisms if needed – MOE – added margins of
distance Asteroid moves
Sub-function: Verify strength of locked connections – MOP – sensor accuracy, tensile
strength
Sub-sub-function: Test latching strength – TPM - tensile, compressive, and yield
strength
Sub-function: Integrate multiple mechanisms – MOP - Combined tensile, compressive,
and yield strength
Sub-sub-function: Design order/process of latching mechanisms – TPM - time
between each successive release of latching mechanisms
Sub-function: Interface with operating system to retrieve/transport Asteroid – MOP –
maximum extension distance, time the catching mechanisms can hold the weight
Sub-sub-function: Apply mission training technical knowledge of system – TPM –
time to transfer mission training knowledge, operational time needed
Function: Continue adding sufficient catching mechanisms if needed – MOE – added tensile
strength
Sub-function: Compare data to past performance – MOP – data storage limits
Sub-sub-function: Consult data archives – TPM – number of documents available
Sub-sub-function: Autonomize decision-making – TPM - computational
specifications, software capability, operational time
Sub-function: Determine cost-benefit of adding more catching mechanisms – MOP –
structural analysis, cost-benefit of adding catching mechanisms
Sub-sub-function: Integrate artificial learning to optimize when to add more
catching mechanisms – TPM – computational specifications, software capability, operational time
2. The terms “Unacceptable”, “Marginal”, “Acceptable” and “Excellent” are generally used to
assess the quality and validity of the architecture alternatives. Define the meaning of these terms
in reference to your system. What are the impacts of the key performance attributes that you
selected and defined as fuzzy terms in question one of Project Task III?
Sizable: The sizing of my asteroid mining system must be large enough to demonstrate its scalability,
through testing, to a full-size mission operation, but also small enough to reduce cost, reduce
manufacturing time, and keep the design simple. Excellent sizability would be anything less than 24
inches (height) x 24 inches (width) x 24 inches (length). Acceptable sizability would be anything
less than 48 inches (height) x 48 inches (width) x 48 inches (length); it would also include cases
where 2 out of 3 measures are within 48 inches and the remaining measure is greater than 48 inches
but less than 96 inches. Marginal sizability would be any size between 48 and 96 inches in all three
measures, or 2 measures between 48 and 96 inches with 1 measure greater than 96 inches. Unacceptable
sizability would be any size greater than 96 inches in two or all three measures.
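The sizability rules above mix per-dimension limits with a 2-out-of-3 condition, so a short sketch
may help make the intended logic checkable. This is a minimal sketch only: the function and argument
names are hypothetical, and combinations the prose leaves undefined default to Marginal.

def classify_sizability(height_in, width_in, length_in):
    # Rate the system's exterior dimensions (inches) against the thresholds stated above.
    dims = [height_in, width_in, length_in]
    under_24 = sum(d < 24 for d in dims)
    under_48 = sum(d < 48 for d in dims)
    mid_48_96 = sum(48 <= d <= 96 for d in dims)
    over_96 = sum(d > 96 for d in dims)

    if under_24 == 3:
        return "Excellent"        # all three measures under 24 in
    if under_48 == 3 or (under_48 == 2 and mid_48_96 == 1):
        return "Acceptable"       # all under 48 in, or 2 of 3 under 48 in with one under 96 in
    if mid_48_96 == 3 or (mid_48_96 == 2 and over_96 == 1):
        return "Marginal"         # the 48-96 in range dominates
    if over_96 >= 2:
        return "Unacceptable"     # two or more measures over 96 in
    return "Marginal"             # assumption: combinations not covered by the text default here

print(classify_sizability(20, 22, 23))    # Excellent
print(classify_sizability(40, 40, 70))    # Acceptable
print(classify_sizability(60, 70, 100))   # Marginal
print(classify_sizability(120, 120, 30))  # Unacceptable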
Uncostly: All costs associated with my asteroid mining system must be affordable to me, given the
limited (if any) outside funding. Excellent cost would be less than $50. Acceptable cost would be
less than $100. Marginal cost would be between $100 and $200. Unacceptable cost would be more than
$200. All of these thresholds assume I am the only one funding the system; if outside funding becomes
available, the excellent, acceptable, marginal, and unacceptable cost levels would be readjusted.
Traceable: All documentation associated with my asteroid mining system must be applicable to my
system and complete. Excellent traceability would be all documentation required by the course
completed, with inaccuracies/imperfections corrected so the documents represent my system as
accurately as possible. Acceptable traceability would be all documentation required by the course
completed, with fewer than 5 documents containing inaccuracies/imperfections. Marginal traceability
would be any missing documents required by the course, or 5-10 documents containing
inaccuracies/imperfections. Unacceptable traceability would be more than 10 documents containing
inaccuracies/imperfections representing my system, or any missing/incomplete documents required by
the course.
Durable: The durability of my asteroid mining system must be sufficient to survive shocks, space
debris, and Earth debris, and to withstand slowing an asteroid down enough for transport without
losing functionality. Excellent durability would be tensile strength greater than 75,000 PSI at
every point on the system. Acceptable durability would be tensile strength greater than 60,000 PSI
at every point on the system. Marginal durability would be tensile strength greater than 45,000 PSI
at every point on the system. Unacceptable durability would be tensile strength less than 45,000 PSI
at any point on the system. Failure analysis can be conducted after a CAD model is produced as a
substitute for physical testing.
Operable: The operability of my asteroid mining system must make its functionality easy to understand
and apply. Excellent operability would be a user with no prior experience figuring out the system's
operation in less than 30 minutes of training, given whatever resources are necessary (such as a
training manual or instructor guidance). Acceptable operability would be a user with no prior
experience figuring out all of its operations in less than 120 minutes of training, given whatever
resources are necessary. Marginal operability would be between 120 and 300 minutes for a user with
no prior experience to learn the system's operations, given whatever resources are necessary.
Unacceptable operability would be more than 300 minutes of training needed for a user with no prior
experience to operate the system, given whatever resources are necessary.
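Since cost, durability, and operability each compare a single number against three cutoffs, the same
small helper can rate all three. This is only a sketch: the helper name and the example values are
assumptions, while the cutoffs themselves are taken from the definitions above.

def rate(value, cutoffs, higher_is_better=False):
    # Map a scalar onto Excellent / Acceptable / Marginal / Unacceptable.
    # cutoffs = (excellent, acceptable, marginal) boundaries from the definitions above.
    excellent, acceptable, marginal = cutoffs
    if higher_is_better:
        if value > excellent:
            return "Excellent"
        if value > acceptable:
            return "Acceptable"
        if value > marginal:
            return "Marginal"
        return "Unacceptable"
    if value < excellent:
        return "Excellent"
    if value < acceptable:
        return "Acceptable"
    if value <= marginal:
        return "Marginal"
    return "Unacceptable"

print(rate(85, (50, 100, 200)))                      # cost in dollars -> Acceptable
print(rate(62_000, (75_000, 60_000, 45_000), True))  # tensile strength in PSI -> Acceptable
print(rate(250, (30, 120, 300)))                     # training time in minutes -> Marginal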
3. Give eight statements based on key performance attributes of your system to describe an
“Unacceptable”, “Marginal”, “Acceptable” and “Excellent” architecture. Provide a Kiviat chart for
each statement. Here are two general examples.
Eg. “Architecture is unacceptable if it fails to reasonably compromise all key performance parameters”
 “Architecture is marginal if key performance 3 is greatly compromised over key performance 1”
Architecture is unacceptable if any of the key performance parameters are
absent.
Architecture is unacceptable if all of the key performance parameters are more than
50% away from the numerical threshold values assigned to their respective
“excellent” categories.
Architecture is marginal if the key performance attributes “traceable” and
“operable” are conceded from reasonable values in the “acceptable” range in order
to prioritize the attributes “uncostly”, “sizable”, and “durable”. Note: “traceable”
replaced “defined”, “operable” replaced “easy-to-use”, and “sizable” replaced
“reasonable size”.
Architecture is marginal if any of the key performance attributes are not
consistent enough to meet minimum measures of effectiveness, technical
performance measures, and measures of performance each time they’re
analyzed.
Architecture is marginal if any of the key performance attributes are not
consistent enough to meet “acceptable” or “excellent” measures each time
they’re analyzed.
Architecture is acceptable if all key performance attributes meet their “marginal”
categorical values.
Architecture is acceptable if four out of five performance attributes fall within 25%
of their “acceptable” categorical values.
Architecture is excellent if all key performance attributes fall within 10% of their
respective “excellent” categorical values.
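The Kiviat charts referenced in this question are not reproduced in this text. As a sketch of how
one could be generated for any of the statements above, the following plots the five key performance
attributes on a 0-3 ordinal scale; the scores shown are placeholder values for illustration, not
assessed results.

import numpy as np
import matplotlib.pyplot as plt

attributes = ["Sizable", "Uncostly", "Traceable", "Durable", "Operable"]
# Placeholder scores: 0 = Unacceptable, 1 = Marginal, 2 = Acceptable, 3 = Excellent
scores = [3, 2, 2, 1, 3]

# One spoke per attribute; repeat the first point to close the polygon
angles = np.linspace(0, 2 * np.pi, len(attributes), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(attributes)
ax.set_yticks([0, 1, 2, 3])
ax.set_yticklabels(["Unacceptable", "Marginal", "Acceptable", "Excellent"])
ax.set_title("Kiviat chart for one architecture statement")
plt.show()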
