Ideas that I don't have time or funding to pursue at this time.
As indicated in [Lov09], it is possible to use system dynamics techniques - in particular, the use of diagrams to qualitatively map the architectures of systems - to understand complex systems and direct design creativity. What exactly is possible with this technique? How can it be adapted to help discover design problems in complex systems and address them?
Another example of a qualitative system architecture of this type is the system influence diagram for obesity.
An arcology is essentially a single integrated structure that is a city. The concept never really took off in the 1970s, in part because mass customization, pre-fabrication, modularity, etc. were not yet available. The key is to provide flexibility at the operational level within a framework of plug-and-play modules. Alexander's pattern language provides that flexibility, but is written for the architect/artist. If one takes Alexander's pattern language as a foundation, how can one design a new arcology?
Some generic notes:
Geometric Calculus is a mathematical language for expressing and elaborating geometric concepts. It might be another way to model designed products.
Related to this is work on using graph theory for solving dynamics of mechanical systems. One approach is [AK75]:
This paper describes a procedure for applying graph theory to the analysis of general, dynamic, three-dimensional, lumped mechanical systems. The authors have observed the similarity between terminal graphs and three-dimensional vectors and have used it to develop the “vector-network model” which forms the bridge between vector methods and graph techniques. Previous applications of graph theory to scalar (one-dimensional) mechanical systems are seen to be a special case of vector-network model. The paper describes the construction and validity of the model, its use in formulating equations of motion and a prototype “self-formulating” computer program for dynamic simulation, based on the model.
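The graph-theoretic view described in the abstract can be sketched minimally: a lumped mechanical system becomes a linear graph, and an incidence matrix relates elements (edges) to connection points (nodes). The two-mass spring/damper chain below is an invented toy example, not the formulation from [AK75].

```python
# Toy linear-graph model of a lumped mechanical system.
# edges: (name, from_node, to_node); "ground" is the datum node.
edges = [
    ("spring1", "ground", "m1"),
    ("spring2", "m1", "m2"),
    ("damper1", "m2", "ground"),
]
nodes = ["ground", "m1", "m2"]

# incidence[i][j] = +1 if edge j leaves node i, -1 if it enters, 0 otherwise
incidence = [[0] * len(edges) for _ in nodes]
for j, (_, a, b) in enumerate(edges):
    incidence[nodes.index(a)][j] = 1
    incidence[nodes.index(b)][j] = -1

# Each column sums to zero: every edge leaves one node and enters another,
# which is what makes cutset (force-balance) equations well posed.
for j in range(len(edges)):
    assert sum(row[j] for row in incidence) == 0
```

From this incidence matrix, cutset and circuit equations (the graph analogues of force balance and kinematic compatibility) can be assembled mechanically, which is the basis of the "self-formulating" simulation the paper describes.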
FMEA (Failure Modes and Effects Analysis) is a very useful method, commonly used in engineering design settings, for predicting how a product can fail. This information is important for shortening lead times: flaws revealed by FMEA can be designed out during development rather than after the product has gone to market.
Can the FMEA method be adapted to work more broadly - beyond just design engineering - as a predictive tool to conduct design analysis?
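The core of conventional FMEA is small enough to sketch: each failure mode gets 1-10 ratings for severity, occurrence, and detection, and their product (the Risk Priority Number) ranks what to fix first. The failure modes and ratings below are illustrative assumptions, not data from any real analysis.

```python
# Minimal FMEA sketch: rank failure modes by Risk Priority Number (RPN).
failure_modes = [
    # (description, severity, occurrence, detection), each rating 1-10
    ("Seal leaks under thermal cycling", 7, 4, 6),
    ("Fastener loosens from vibration", 5, 6, 3),
    ("Housing cracks on impact", 9, 2, 5),
]

def rpn(severity, occurrence, detection):
    """Classic RPN: product of the three 1-10 ratings."""
    return severity * occurrence * detection

ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

Adapting FMEA beyond design engineering would largely mean generalizing what counts as a "failure mode" and how the three ratings are elicited, while keeping this ranking machinery intact.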
Concept evaluation with methods like pairwise comparison and weighted decision matrices has no way to account for the [situation] of those who will use/interact with the artifact being designed. The result is that the evaluation is done in the moment, without any particular assurance that it accounts for the particulars of how products are actually used. Usage scenarios may provide an approximation of those situations, and could help designers evaluate design concepts more properly.
The research question is: Is there a way to use usage scenarios to promote concept evaluation that accounts for the kinds of situations that will arise once the artifact is interacting with other artifacts, people, etc?
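One way the question could be operationalized: extend a weighted decision matrix so each concept is scored per usage scenario, then aggregate by scenario likelihood. Everything below (concepts, criteria, scores, weights) is invented for illustration.

```python
# Scenario-weighted decision matrix sketch.
criteria_weights = {"ease_of_use": 0.5, "durability": 0.3, "cost": 0.2}

# scores[concept][scenario][criterion], each on a 1-5 scale
scores = {
    "concept_A": {
        "office_use": {"ease_of_use": 5, "durability": 3, "cost": 4},
        "field_use":  {"ease_of_use": 2, "durability": 2, "cost": 4},
    },
    "concept_B": {
        "office_use": {"ease_of_use": 3, "durability": 4, "cost": 3},
        "field_use":  {"ease_of_use": 4, "durability": 5, "cost": 3},
    },
}

scenario_likelihood = {"office_use": 0.6, "field_use": 0.4}

def scenario_score(concept):
    """Criterion-weighted score per scenario, aggregated by likelihood."""
    total = 0.0
    for scenario, p in scenario_likelihood.items():
        s = sum(w * scores[concept][scenario][c]
                for c, w in criteria_weights.items())
        total += p * s
    return total

for concept in scores:
    print(concept, round(scenario_score(concept), 2))
```

The interesting effect is that a concept that dominates in the designer's default (in-the-moment) scenario can lose once less convenient scenarios are weighted in.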
Though not invented for the movie, spherical wheel drives (sometimes called ball wheels) got their 15 minutes of fame in the otherwise forgettable movie I, Robot. They kept the futuristic Audi RSQ driven by Will Smith's character on the road.
It strikes me that these things would be wonderfully useful. Some attempts have been made:
The question is: can we make this a viable technology for automobiles?
Extensive learning occurs during design. The designer must gain a deep understanding of the design problem (i.e. must learn it) in order to solve it. Since concept maps are learning aids, one might ask: Do concept maps contribute positively to the learning activities of design engineers?
To execute this project, CmapTools and VUE will be used to construct concept maps of various design problems, and to get practicing design engineers to use concept maps to help them explore design problems.
A visiting student from IFMA, Jerome Czyz, has written a report on the design of a connector for modules in reconfigurable robots (838KB). This report would make a wonderful case study of the application of QFD, but it must be rewritten with a more “educational” voice.
Originally, this work was the MASc Research Topic of Mr. Bing Ye.
Tolerances are one of the few key controllable parameters that allow engineers to control both manufacturing cost and quality of manufactured parts. However, this coupling has rarely been taken advantage of in the modeling of tolerance design (the process of assigning tolerances to parts so as to achieve some quantifiable optimum).
Past efforts can be grouped into three categories. In the serial approach, design tolerances are assigned by design engineers. Then manufacturing process tolerances are assigned by manufacturing engineers and process planners, subject to the design tolerances as constraints. Finally, quality engineers assess the quality loss and report this information back to other engineers as feedback to the process.
The second approach seeks to minimize the manufacturing cost by assigning appropriate tolerances in both design and manufacturing phases, using a maximum allowable quality loss as a constraint to limit the optimization process.
The third approach seeks to minimize the quality loss by assigning appropriate tolerances, and using a maximum allowable manufacturing cost as a constraint to limit the optimization process.
Clearly, none of these techniques takes into account the inherent coupling between quality and manufacturing cost, and the concomitant coupling of effects on these of both design and manufacturing.
A new approach, Simultaneous Tolerance Synthesis for Manufacturing and Quality (STSMQ) has been developed and appears to successfully optimize part tolerances so as to minimize the combined total quality loss and manufacturing cost. This technique is a strong concurrent engineering technique because it (a) focuses on the life-cycle total cost of a part, and (b) it integrates the synthesis process, rather than having it occur in distinct design and manufacturing phases.
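The combined objective at the heart of this approach can be sketched in a toy form: pick a tolerance minimizing manufacturing cost plus Taguchi-style quality loss, instead of optimizing one with the other as a constraint. The cost models and coefficients below are invented stand-ins, not the actual STSMQ formulation.

```python
# Toy combined-objective tolerance optimization.
def manufacturing_cost(t):
    # tighter tolerances cost more; a simple reciprocal model
    return 2.0 + 0.5 / t

def quality_loss(t):
    # Taguchi-style quadratic loss grows with allowed deviation
    k = 40.0
    return k * t ** 2

def total_cost(t):
    return manufacturing_cost(t) + quality_loss(t)

# crude grid search over candidate tolerances (mm)
candidates = [0.01 * i for i in range(1, 101)]
best = min(candidates, key=total_cost)
print(f"best tolerance = {best:.2f}, total cost = {total_cost(best):.2f}")
```

The serial and single-constraint approaches described above would each fix one of the two terms; minimizing their sum is what lets the coupling work in the designer's favour.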
Preliminary experiments have shown that over the life cycle of a part, the STSMQ technique shows improvement over all other methods investigated, sometimes as high as 30%.
There are various limitations on the technique at this time, but they arise from the lack of mathematical models for mapping different application domains into the model. The current implementation works only with dimensional tolerances of mechanical parts.
Future work includes expanding the flexibility of the technique to include other kinds of tolerances, as well as expanding its applicability to other domains besides that of mechanical parts. Another important potential extension involves incorporating other aspects/phases of the product life cycle: tolerances affect assembly as well as manufacture - what particular benefits can be reaped from modeling assembly tolerances as separate items from manufacturing and design tolerances?
Intelligent Simulation incorporates AI into simulation software. For this project, I'm interested in using AI to add control and analysis capabilities to the simulation of industrial processes (e.g. manufacturing systems) that would otherwise be relegated to human simulation experts. Currently, I'm trying to blend concepts of reinforcement learning and case-based reasoning into an agent that, working with another agent or system that can simulate an industrial process, can suggest strategies to handle failures of industrial system components, such as machines, so as to minimize the effects of those failures on productivity of the system as a whole.
An intelligent simulation architecture for a combined RL/CBR intelligent agent is under development.
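The reinforcement-learning half of the proposed agent can be sketched as tabular learning over a toy "machine failed" state, choosing among recovery strategies. The states, actions, and reward numbers are invented; a real agent would receive its rewards from the simulation itself (e.g. throughput lost under each strategy).

```python
# Toy epsilon-greedy learner for failure-recovery strategy selection.
import random
random.seed(0)

actions = ["reroute_jobs", "wait_for_repair", "use_backup_machine"]

def simulated_reward(action):
    # stand-in for running the industrial-process simulation;
    # rewards are negative productivity losses plus noise
    base = {"reroute_jobs": -3.0, "wait_for_repair": -8.0,
            "use_backup_machine": -5.0}[action]
    return base + random.uniform(-1.0, 1.0)

q = {a: 0.0 for a in actions}     # value estimate per strategy
alpha, epsilon = 0.1, 0.2

for _ in range(2000):
    if random.random() < epsilon:
        a = random.choice(actions)                 # explore
    else:
        a = max(q, key=q.get)                      # exploit
    q[a] += alpha * (simulated_reward(a) - q[a])   # incremental update

best = max(q, key=q.get)
print("learned strategy:", best)
```

The case-based reasoning half would sit alongside this: retrieving past failure episodes similar to the current one to seed or bias the value estimates, rather than learning each failure type from scratch.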
There are similarities between game-playing in the mathematical sense, and design development. Game Theory includes formalisms for strategy, tactics, and reasoning about what other “players” know, whether they are working with you or against you. Can the theory of games be applied to design processes to yield new insights and best practices?
Design is essentially synthetic, which implies designers are often working with a lack of data, or a lack of well-defined data. To reason well about designs, then, some means is needed to rationally discuss ill-defined situations, problems, information, etc. Interval calculus is one way of treating this; fuzzy logic is another. Both these techniques are essentially representational, in that they provide a structure for representing qualitative information formally. But there's no consensus on the best inference (reasoning) systems for these representations.
One idea in this regard is Kuipers' theory of qualitative reasoning. It takes advantage of a representation like interval calculus or fuzzy logic, and allows reasoning to be carried out in a methodical manner.
How can qualitative reasoning be incorporated into ALX3d? Is fuzzy logic better for general engineering design purposes than interval calculus?
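The representational side of interval calculus is simple enough to sketch: quantities are carried as [lo, hi] bounds, so ill-defined values can still be combined and reasoned about. These are standard textbook interval rules, not tied to ALX3d or any particular implementation.

```python
# Minimal interval-arithmetic sketch.
class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # endpoints of a product: min/max over all endpoint pairs
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# e.g. an uncertain load times an uncertain lever arm
load = Interval(95.0, 105.0)   # N, only bounds known
arm = Interval(0.4, 0.5)       # m
print(load * arm)              # bounds on the resulting moment
```

The open inference question is what sits on top of such a representation: qualitative reasoning in Kuipers' sense propagates sign and ordering information through such bounds, rather than just computing them.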
Bond graph theory (BGT) is a very elegant modeling system that has never been successfully computerized. Nor has it been subjected to rigorous formal development. BGT relates to modeling of both form and function and has been extensively used in industry. But how does it relate to knowledge-based representations in design? Besides providing a pretty good model of interacting systems, does BGT help us develop KBSs for engineering? Or vice versa?
In an age of green engineering, it is becoming more and more important to develop methodologies to improve disassembly of products that have reached the end of their lives. How can knowledge for disassembly be integrated with knowledge for design? The issue is not as easy as it seems. For example, parts that were assembled cannot necessarily be disassembled – so there is no necessary “inverse” process; two welded parts cannot be disassembled, even though welding is an assembly process – they have to be “de-manufactured.” How can this kind of knowledge be incorporated into a universal knowledge base for design?
In order to classify design information and artifacts, a classification scheme is required. Classifications that are one-dimensional (a strict hierarchy) have often been proposed. But such hierarchies fail to be sufficiently concise and flexible. Similar problems have been encountered by library scientists in the development and use of the Dewey Decimal System and related systems. Library science has met this challenge through the development of faceted classification schemes that permit an n-dimensional space of classification criteria. Resulting organizations of items resemble networks (multiple, overlapping hierarchies). The goal of this project is to develop a facet-based taxonomy for design engineering. Applications include (a) organizing the body of knowledge of design engineering, (b) improved organization of online resources, and (c) a deeper understanding of the interrelationships between aspects of design engineering.
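The mechanics of a faceted scheme can be sketched simply: each artifact is tagged with one value per facet, and retrieval intersects facet filters instead of walking a single tree. The facet names and catalog entries below are invented examples, not the proposed taxonomy.

```python
# Faceted classification sketch.
catalog = [
    {"name": "gear pump",   "function": "transport", "domain": "fluid",
     "lifecycle": "detail_design"},
    {"name": "heat sink",   "function": "dissipate", "domain": "thermal",
     "lifecycle": "detail_design"},
    {"name": "duct layout", "function": "transport", "domain": "fluid",
     "lifecycle": "concept"},
]

def query(**facets):
    """Return names of items matching every given facet=value filter."""
    return [item["name"] for item in catalog
            if all(item.get(f) == v for f, v in facets.items())]

print(query(function="transport", domain="fluid"))
# the same item is reachable through several facet paths,
# unlike a strict one-dimensional hierarchy
print(query(lifecycle="detail_design"))
```

Note how "gear pump" appears in both result sets: in a strict hierarchy it would have to live under exactly one branch, forcing the other access path to be lost.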
When are best practices better than standards? A best practice is a solution strategy that generates typically near-optimal solutions, that evolves over time typically through trial and error. A standard is intentionally developed to meet a need, with full conscious intent and with a systematic development. Best practices are typically very responsive to contextual details, whereas standards are not necessarily so. Best practices tend to be more flexible, but less well controlled, than standards. What exactly are the differences? When is one approach better than the other? Can the merits of both be combined into a single paradigm?
Given the tragedy of the True North 2, I propose a research project that will build a low-cost expert system to present a checklist to an inspector based on the regulations. It has been noted by various experts that the maritime regulations governing such vessels are extremely complicated, and that the inspectors are overworked and unable to track the regulations themselves. By implementing the regulations in an expert system, a checklist can be presented to the inspector, the responses to which might dynamically alter the checklist itself. The dynamic nature of the checklist ensures that the right questions are asked and the right regulations are enforced by the inspector, while removing much of the difficult administrative work of figuring out which regulations apply. Also, by keeping the checklists online as web pages, (semi)automated analyses of which ships have had consistent violation histories can be carried out.
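The dynamic-checklist idea above can be sketched as regulations encoded as condition/question rules, where earlier answers determine which questions appear next. The rules below are invented placeholders, not actual maritime regulations.

```python
# Rule-driven dynamic checklist sketch.
rules = [
    {"id": "R1", "question": "Does the vessel carry passengers?",
     "applies": lambda ans: True},
    {"id": "R2", "question": "Are there enough life jackets for all passengers?",
     "applies": lambda ans: ans.get("R1") == "yes"},
    {"id": "R3", "question": "Is the passenger capacity placard posted?",
     "applies": lambda ans: ans.get("R1") == "yes"},
]

def next_questions(answers):
    """Questions that currently apply and have not yet been answered."""
    return [r["question"] for r in rules
            if r["id"] not in answers and r["applies"](answers)]

# Simulated inspection: the first answer expands the checklist.
answers = {}
print(next_questions(answers))   # only the gating question applies at first
answers["R1"] = "yes"
print(next_questions(answers))   # passenger-related questions now appear
```

Because each rule carries an identifier, the recorded answers double as the audit trail needed for the violation-history analyses mentioned above.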