
Ethiopian Airlines B737 crashes

  • Thread starter Deleted member 51920

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
The point I'm making is that regulators like the FAA can't be expected to find bugs in software.

But they are obliged to ensure that manufacturers have systems and standards in place that optimally avoid, discover and rectify bugs or scrap flawed product designs before they come to market.
 

Trampas

Well-known member
Joined
Oct 30, 2007
Messages
15,269
I would speculate that had the first crash happened with a European or American airline then perhaps the 737 Max would have been grounded there and then so that the second crash would not have happened at all. Indonesian airlines have a poor reputation. Both Lion Air and Garuda have been banned from flying in EU airspace at various times... although they are permitted to fly now.
 

General Mayhem

Well-known member
Joined
Sep 20, 2011
Messages
11,207
I would speculate that had the first crash happened with a European or American airline then perhaps the 737 Max would have been grounded there and then so that the second crash would not have happened at all. Indonesian airlines have a poor reputation. Both Lion Air and Garuda have been banned from flying in EU airspace at various times... although they are permitted to fly now.
Depends who sets the reputation, I guess.

Swamp draining is good crack, in fairness.
 

Orbit v2

Well-known member
Joined
Dec 8, 2010
Messages
11,563
Don’t you think Boeing has the biggest interest in ensuring its product is fit for purpose?

Why didn’t Boeing put more than one sensor?

Boeing is a private company, the FAA is not.

Reputation only affects the bottom line of one of these.
I read something about it. It was to do with the assessment of how dangerous a failure of the system would be. It was considered that a failure would not be catastrophic, and I'd accept that is a mistake the FAA are partly responsible for. So, when the risk was lower than this level, only one sensor was required.

But to address your first question, the answer is obviously yes. They didn't set out to design something dangerous. But, maybe they can be accused of cutting costs to the bone, and barely meeting (inadequate) standards.
 
Last edited:

Nebuchadnezzar

Well-known member
Joined
Mar 15, 2011
Messages
10,770
I read something about it. It was to do with the assessment of how dangerous a failure of the system would be. It was considered that a failure would not be catastrophic, and I'd accept that is a mistake the FAA are partly responsible for. So, when the risk was lower than this level, only one sensor was required.

But to address your first question, the answer is obviously yes. They didn't set out to design something dangerous. But, maybe they can be accused of cutting costs to the bone, and barely meeting standards.
That, in combination with the fact that from when the Max was brought into service until the aftermath of the Lion Air crash the pilots knew nothing about MCAS.....no training and nothing in their manuals about it. It’s very hard to understand how Boeing could go through the development program with these as deliberate decisions.
 

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
That would be a serious miscalculation by Trump; the EU would have good cause to refuse to accept the FAA's airworthiness cert for the 737 Max going forward, which would seriously fnck Boeing.
Plus China:

 

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
I read something about it. It was to do with the assessment of how dangerous a failure of the system would be. It was considered that a failure would not be catastrophic, and I'd accept that is a mistake the FAA are partly responsible for. So, when the risk was lower than this level, only one sensor was required.

But to address your first question, the answer is obviously yes. They didn't set out to design something dangerous. But, maybe they can be accused of cutting costs to the bone, and barely meeting (inadequate) standards.

The MCAS was there to prevent a stall, but its failure was not considered to be catastrophic..

Jesus wept...what kind of people consider the stall of a commercial aircraft with 150+ passengers not to be catastrophic?.. and what kind of people even entertain this MAXimum bullshyte.
 
Last edited:

Orbit v2

Well-known member
Joined
Dec 8, 2010
Messages
11,563
The MCAS was there to prevent a stall, but its failure was not considered to be catastrophic..

Jesus wept...what kind of people consider the stall of a commercial aircraft with 150+ passengers not to be catastrophic?.. and what kind of people even entertain this MAXimum bullshyte.
The MCAS was there to prevent a stall automatically, without any intervention from the pilots. That doesn't mean the pilots weren't able to recover from the stall in other ways, eg by pushing the stick forward, especially if they knew about this particular characteristic of the plane, which they didn't.
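
To make that concrete, here is a minimal Python sketch of the kind of single-sensor automatic trim behaviour being described. It is purely illustrative, not Boeing's actual implementation; the threshold, trim step and sensor values are invented.

```python
# Illustrative sketch only, not Boeing's code: it mimics the behaviour described
# above, where an automatic system trims nose-down based on one AoA sensor,
# with no cross-check and no pilot consent. All numbers are invented.

AOA_THRESHOLD_DEG = 14.0   # hypothetical "approaching the stall" angle of attack
TRIM_STEP_DEG = 0.5        # hypothetical nose-down stabiliser increment


def automatic_trim_command(single_aoa_reading_deg: float) -> float:
    """Return a nose-down trim increment driven by a single AoA input."""
    if single_aoa_reading_deg > AOA_THRESHOLD_DEG:
        # Acts automatically, without pilot intervention, and cannot tell a
        # genuinely high angle of attack from a failed sensor reading high.
        return -TRIM_STEP_DEG
    return 0.0


# A vane failed high keeps commanding nose-down trim on every pass:
for reading_deg in [5.0, 6.0, 74.5, 74.5, 74.5]:
    print(reading_deg, automatic_trim_command(reading_deg))
```

The only point of the sketch is that a function fed by one input has no way to distinguish a real high angle of attack from a failed vane reading high.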
 

riven

Well-known member
Joined
Oct 4, 2007
Messages
2,190
The point I'm making is that regulators like the FAA can't be expected to find bugs in software. There are aspects of this which they should have questioned, eg the reliance on one sensor. So, I'm not sure what exactly you are disagreeing with,
Not bugs exactly, no. However, they should be able to see very quickly that the proposed safety case cannot work. Relying on a single safety instrument will not provide adequate protection unless the assumed event frequency is absurdly low. Bugs don't matter...
 

General Mayhem

Well-known member
Joined
Sep 20, 2011
Messages
11,207
I read something about it. It was to do with the assessment of how dangerous a failure of the system would be. It was considered that a failure would not be catastrophic, and I'd accept that is a mistake the FAA are partly responsible for. So, when the risk was lower than this level, only one sensor was required.

But to address your first question, the answer is obviously yes. They didn't set out to design something dangerous. But, maybe they can be accused of cutting costs to the bone, and barely meeting (inadequate) standards.
If your final sentence is proven to be true then your penultimate sentence is proven to be false.

I’m still unclear as to the role of the FAA in setting or monitoring standards for aircraft design.
 

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
The MCAS was there to prevent a stall automatically, without any intervention from the pilots. That doesn't mean the pilots weren't able to recover from the stall in other ways, eg by pushing the stick forward, especially if they knew about this particular characteristic of the plane, which they didn't.
A stall is almost always a catastrophic event:

"Only at very high altitudes can experienced pilots succeed in regaining control over such a falling plane." ..

"Especially when climbing, such situations almost always end in a crash. Commercial aircraft are most frequently involved in accidents at this phase of the flight."



Aircraft are rarely at risk of stalling during normal high-altitude flight between takeoff and landing while flying straight and level; however, it’s the critical phases of takeoff and landing that expose them to the risk of stalling.. they are also more at risk during turning maneuvers, and particularly in emergency avoidance maneuvers.. for example the Air France crash, where the pilots became disorientated and seemingly couldn’t distinguish between flying and stalling.

There’s also the high-altitude airports where the MAX was seemingly deemed unfit for use because of its exceptional tendency to stall during takeoffs and landings.

Boeing Has Called 737 MAX 8 'Not Suitable' for Certain Airports:
 
Last edited:

Nebuchadnezzar

Well-known member
Joined
Mar 15, 2011
Messages
10,770
A stall is almost always a catastrophic event:

"Only at very high altitudes can experienced pilots succeed in regaining control over such a falling plane." ..

"Especially when climbing, such situations almost always end in a crash. Commercial aircraft are most frequently involved in accidents at this phase of the flight."



Aircraft are rarely at risk of stalling during normal high-altitude flight between takeoff and landing while flying straight and level; however, it’s the critical phases of takeoff and landing that expose them to the risk of stalling.. they are also more at risk during turning maneuvers, and particularly in emergency avoidance maneuvers.. for example the Air France crash, where the pilots became disorientated and seemingly couldn’t distinguish between flying and stalling.

There’s also the high-altitude airports where the MAX was seemingly deemed unfit for use because of its exceptional tendency to stall during takeoffs and landings.

Boeing Has Called 737 MAX 8 'Not Suitable' for Certain Airports:
Stalls are not “almost always a catastrophic event”. Stall recovery is a basic flying skill and all pilots train for it from the early stages of their basic training. Commercial pilots practice stall recovery in their simulator checks. It’s such a fundamental flying skill that the pilot’s response should be semi-instinctive. It is not a particularly difficult situation to recover from if responded to promptly. This is generally the case.....one caveat is that it depends on the aircraft. The stall characteristics of different aircraft types are different....but in general most civilian aircraft types have relatively benign stall characteristics.

If a stall is not responded to correctly and promptly it can result in the aircraft going well beyond that critical angle of attack.....recovery from that position can be very difficult and in extreme cases it can be irrecoverable.

Aircraft can stall at high altitude....hence the term ‘coffin corner’....this being the corner of the flight envelope where the margin between max speed/Mach and stall speed becomes very small. An aircraft experiencing severe turbulence or shear at high altitude can enter a stall. Hence pilots flying heavily loaded aircraft at higher altitudes pay close attention to potential turbulence and this speed margin when they decide which cruising level to accept. In such conditions they may decide to fly at a lower level than planned. Stalls at low altitude, shortly after take off or before landing, are more critical because of the proximity to terrain.
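
As a rough numerical illustration of that shrinking margin, here is a small Python sketch using the standard atmosphere and an invented aircraft; the weight, wing area, CL_max and Mmo values are made-up assumptions, not any real type's figures.

```python
# Rough illustration of the shrinking margin between stall speed and the maximum
# operating Mach number at altitude ("coffin corner"). Standard-atmosphere model;
# the aircraft numbers are invented, not any specific type's data.
import math

G, R, GAMMA, LAPSE = 9.80665, 287.05, 1.4, 0.0065   # gravity, gas constant, etc.
T0, RHO0 = 288.15, 1.225                            # sea-level temperature, density

# Hypothetical aircraft: mass 70 t, wing area, max lift coefficient, Mach limit.
WEIGHT_N, WING_AREA_M2, CL_MAX, MMO = 70_000 * G, 125.0, 1.4, 0.82

for alt_m in (0, 5_000, 10_000, 12_500):
    temp = T0 - LAPSE * min(alt_m, 11_000)                    # ISA troposphere
    rho = RHO0 * (temp / T0) ** (G / (LAPSE * R) - 1)
    if alt_m > 11_000:                                        # crude stratosphere fix
        rho *= math.exp(-G * (alt_m - 11_000) / (R * temp))
    stall_tas = math.sqrt(2 * WEIGHT_N / (rho * WING_AREA_M2 * CL_MAX))  # 1g stall
    mmo_tas = MMO * math.sqrt(GAMMA * R * temp)               # Mach limit as TAS
    print(f"{alt_m:>6} m  stall {stall_tas:5.1f} m/s  Mmo {mmo_tas:5.1f} m/s  "
          f"margin {mmo_tas - stall_tas:5.1f} m/s")
```

The 1g stall speed in true-airspeed terms rises as density falls while the Mach limit expressed as true airspeed does not, so the margin narrows with altitude; turbulence or manoeuvring load raises the stall speed further, roughly with the square root of the load factor.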
 
Last edited:

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
This issue is a lot bigger than the simple fix that some posters here seem to think it is:


"Boeing will have to answer for the design flaw that is at the heart of the controversy surrounding the 737 Max in the coming weeks and months."

 
Last edited:

riven

Well-known member
Joined
Oct 4, 2007
Messages
2,190
I read something about it. It was to do with the assessment of how dangerous a failure of the system would be. It was considered that a failure would not be catastrophic, and I'd accept that is a mistake the FAA are partly responsible for. So, when the risk was lower than this level, only one sensor was required.
If that is true, then the FAA should be hammered. I have done the maths and you cannot make it work using a single input. The reliability of FAA authorisations now comes into question.
 

riven

Well-known member
Joined
Oct 4, 2007
Messages
2,190
I am getting a clearer picture, so I will try to give a rundown of how I see it. I will be drawing on my process safety experience.

Boeing designed a plane and, however it arose, they realised that the design made a stall event more likely. Here they have two choices: eliminate the hazard (by redesign) or install a safety system to protect against the event. Both can be valid approaches, and Boeing went with the latter.

From here they should have conducted a safety study; I use layer of protection analysis (LOPA). In this you assign a frequency to the event (a stall causing a crash), hopefully based on data. An immediate issue is that this plane design has not had many flying hours. It would be interesting to see what data they based their frequency number on, and whether they used the older model strictly as guidance or played it more safely. Say you assign the event a frequency of one in ten years, or 0.1 per year (it is now 0.2, BTW).

You also assign a consequence rating. In LOPA, this would be the top level: fatalities greater than five. Your justification for this is: in a system with no protections, can the failure occur due to the design or operation of the thing? That is a yes, so fatalities greater than five, or "offsite" fatalities (the plane crashes into a building, say), is clearly possible. Typically the tolerable frequency we would be looking at would be 0.000001 per year, though given the consequence you might want to go for 0.00000001.

You then look at your protective layers, e.g. pilot training, basic computer controls, MCAS (which I am assuming is a control system independent of the basic controls), plus others, and assign a probability of failure on demand (PFD) to each. The rating of the MCAS system would also be interesting to see. I cannot see how a single-input/single-instrument system can be granted a reliability better than 0.1, i.e. failing on no more than one demand in ten. For the training you might accept 0.01, and for the basic control (if independent) you would accept 0.1. So that is 0.1*0.1*0.01*0.1 = 0.00001.

This is why I say the maths cannot work. At best, using the least conservative numbers for frequency, I am borderline needing a safety-rated system. So, good practice: install a safety-rated system that is independent, reliable and cost effective. Sure, there may be better data and there may be some mitigation, but we are not going to close the gap with a single instrumented system, no matter how robust it is.

The designer should have seen that, and has primary responsibility. The regulator should also have seen this, and should also have taken account of best practice in the industry, whereby angle of attack sensor redundancy is standard even for more stable aircraft.
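
A minimal sketch of that arithmetic in Python, using the illustrative numbers from this post; they are assumed values for the example, not actual certification figures.

```python
# Sketch of the LOPA-style arithmetic described above. The event frequency, PFD
# values and tolerable frequency are the illustrative numbers from this post,
# not figures from Boeing or the FAA.

initiating_event_per_year = 0.1      # assumed stall-upset frequency (1 in 10 years)
tolerable_per_year = 1e-6            # target for a >5 fatality consequence

protection_layers_pfd = {
    "pilot training / response": 0.01,
    "basic flight controls": 0.1,
    "MCAS (single AoA input)": 0.1,  # hard to credit better than 0.1 on one sensor
}

mitigated = initiating_event_per_year
for layer, pfd in protection_layers_pfd.items():
    mitigated *= pfd                 # each independent layer must fail on demand

print(f"mitigated event frequency:     {mitigated:.0e} per year")
print(f"tolerable frequency:           {tolerable_per_year:.0e} per year")
print(f"risk reduction still required: {mitigated / tolerable_per_year:.0f}x")
```

Even with the least conservative inputs the mitigated frequency comes out around 1e-05 per year against a 1e-06 target, which is the gap referred to above.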
 
Last edited:

Nebuchadnezzar

Well-known member
Joined
Mar 15, 2011
Messages
10,770
I am getting a clearer picture, so I will try to give a rundown of how I see it. I will be drawing on my process safety experience.

Boeing designed a plane and, however it arose, they realised that the design made a stall event more likely. Here they have two choices: eliminate the hazard (by redesign) or install a safety system to protect against the event. Both can be valid approaches, and Boeing went with the latter.

From here they should have conducted a safety study; I use layer of protection analysis (LOPA). In this you assign a frequency to the event (a stall causing a crash), hopefully based on data. An immediate issue is that this plane design has not had many flying hours. It would be interesting to see what data they based their frequency number on, and whether they used the older model strictly as guidance or played it more safely. Say you assign the event a frequency of one in ten years, or 0.1 per year (it is now 0.2, BTW).

You also assign a consequence rating. In LOPA, this would be the top level: fatalities greater than five. Your justification for this is: in a system with no protections, can the failure occur due to the design or operation of the thing? That is a yes, so fatalities greater than five, or "offsite" fatalities (the plane crashes into a building, say), is clearly possible. Typically the tolerable frequency we would be looking at would be 0.000001 per year, though given the consequence you might want to go for 0.00000001.

You then look at your protective layers, e.g. pilot training, basic computer controls, MCAS (which I am assuming is a control system independent of the basic controls), plus others, and assign a probability of failure on demand (PFD) to each. The rating of the MCAS system would also be interesting to see. I cannot see how a single-input/single-instrument system can be granted a reliability better than 0.1, i.e. failing on no more than one demand in ten. For the training you might accept 0.01, and for the basic control (if independent) you would accept 0.1. So that is 0.1*0.1*0.01*0.1 = 0.00001.

This is why I say the maths cannot work. At best, using the least conservative numbers for frequency, I am borderline needing a safety-rated system. So, good practice: install a safety-rated system that is independent, reliable and cost effective. Sure, there may be better data and there may be some mitigation, but we are not going to close the gap with a single instrumented system, no matter how robust it is.

The designer should have seen that, and has primary responsibility. The regulator should also have seen this, and should also have taken account of best practice in the industry, whereby angle of attack sensor redundancy is standard even for more stable aircraft.
The 737 Max has 2 AoA (Angle of Attack) sensors. The Airbus A320 family has 3. The data from the AoAs feed into several flight control computers which serve a wide range of functions....just one of which is stall warning. One of the major issues in this case is that the MCAS software only works on data from a single AoA source.....a bizarre design decision. The issue is not really about the AoAs, it’s about the decision to rely on just a single data source for MCAS....no redundancy. I can’t think of any example of a major system in modern public transport aircraft that has no redundancy.
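
For illustration of the redundancy point, here is a small Python sketch contrasting a single-source feed with a simple two-sensor cross-check. It is a generic pattern with invented numbers, not Boeing's actual or revised MCAS logic.

```python
# Illustrative only: contrasts a single-source feed with a simple two-sensor
# cross-check that inhibits the function when the vanes disagree. This is a
# generic redundancy pattern, not Boeing's actual or revised MCAS logic.
from typing import Optional

DISAGREE_LIMIT_DEG = 5.5   # hypothetical allowable split between the two vanes


def single_source(left_aoa_deg: float, right_aoa_deg: float) -> float:
    """No redundancy: one bad vane drives the function directly."""
    return left_aoa_deg


def cross_checked(left_aoa_deg: float, right_aoa_deg: float) -> Optional[float]:
    """Use the data only when both vanes broadly agree; otherwise inhibit."""
    if abs(left_aoa_deg - right_aoa_deg) > DISAGREE_LIMIT_DEG:
        return None                       # disagreement -> no automatic input
    return (left_aoa_deg + right_aoa_deg) / 2.0


# One vane failed high (74.5 deg) while the other reads a normal 5 deg:
print(single_source(74.5, 5.0))    # 74.5 -> the function acts on the bad value
print(cross_checked(74.5, 5.0))    # None -> the function is inhibited instead
```

With a cross-check, a single failed vane leads to the function being inhibited rather than acted upon, which is the redundancy being pointed out as missing here.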
 

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
Stalls are not “almost always a catastrophic event”. Stall recovery is a basic flying skill and all pilots train for it from the early stages of their basic training. Commercial pilots practice stall recovery in their simulator checks. It’s such a fundamental flying skill that the pilot’s response should be semi-instinctive. It is not a particularly difficult situation to recover from if responded to promptly. This is generally the case.....one caveat is that it depends on the aircraft. The stall characteristics of different aircraft types are different....but in general most civilian aircraft types have relatively benign stall characteristics.

If a stall is not responded to correctly and promptly it can result in the aircraft going well beyond that critical angle of attack.....recovery from that position can be very difficult and in extreme cases it can be irrecoverable.

Aircraft can stall at high altitude....hence the term ‘coffin corner’....this being the corner of the flight envelope where the margin between max speed/Mach and stall speed becomes very small. An aircraft experiencing severe turbulence or shear at high altitude can enter a stall. Hence pilots flying heavily loaded aircraft at higher altitudes pay close attention to potential turbulence and this speed margin when they decide which cruising level to accept. In such conditions they may decide to fly at a lower level than planned. Stalls at low altitude, shortly after take off or before landing, are more critical because of the proximity to terrain.

Your opinions are very much at odds with the expert ones in the article I linked.

I was referring to stalls in relation to commercial aircraft. Of course there are military aircraft that push those limits to dangerous extremes and practice real-life recovery.. I doubt you have ever had to recover a real commercial aircraft with passengers on board from a stall.. apart from in a virtual reality simulator?
 

Pabilito

Well-known member
Joined
Feb 24, 2008
Messages
5,598
The 737 Max has 2 AoA (Angle of Attack) sensors. The Airbus A320 family has 3. The data from the AoAs feed into several flight control computers which serve a wide range of functions....just one of which is stall warning. One of the major issues in this case is that the MCAS software only works on data from a single AoA source.....a bizarre design decision. The issue is not really about the AoAs, it’s about the decision to rely on just a single data source for MCAS....no redundancy. I can’t think of any example of a major system in modern public transport aircraft that has no redundancy.

In all walks of life, most accidents result from a coincidence of two or more unusual circumstances.. e.g. looking at your phone while a car pulls out in front of you. Here you have three: firstly, a design flaw rendering the aircraft more susceptible to stall; secondly, the resultant kludge fix-up; and thirdly, the kludge itself was a bizarrely botched one... maybe even a fourth, where the AoA sensor had an extremely low MTBF (Mean Time Between Failures) rating.

My point being that the basic inherent design flaw means the MAX needs only one other unusual occurrence for an accident to occur.
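
To put rough numbers on that, here is a small Python sketch of how requiring one fewer coincident unusual event changes the per-flight odds; the MTBF, flight time and probabilities are invented for illustration.

```python
# Back-of-envelope sketch of the point above: an assumed sensor MTBF gives a
# per-flight failure probability (exponential model), and an accident needing
# fewer coincident "unusual" events becomes far more likely. All numbers are
# invented for illustration only.
import math

FLIGHT_HOURS = 2.0
ASSUMED_MTBF_HOURS = 50_000.0          # hypothetical AoA vane MTBF

p_sensor_fail = 1 - math.exp(-FLIGHT_HOURS / ASSUMED_MTBF_HOURS)
p_other_unusual_event = 1e-3           # hypothetical chance of a second rare factor

print(f"per-flight sensor failure probability:  {p_sensor_fail:.1e}")
print(f"accident needs two coincident factors:  {p_sensor_fail * p_other_unusual_event:.1e}")
print(f"accident needs only the sensor failure: {p_sensor_fail:.1e}")
```

Dropping from two required coincident factors to one raises the per-flight odds by the inverse of the second factor's probability, a factor of a thousand in this made-up example.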
 

riven

Well-known member
Joined
Oct 4, 2007
Messages
2,190
My point being that the basic inherent design flaw means the MAX needs only one other unusual occurrence for an accident to occur.
That is true, but it can be acceptable. You seem to be saying that it is absolutely unacceptable, but we can only say that when we know how much the frequency of stalling has increased due to the design, and whether it is possible to protect against that increase. Taking an absolutist position would mean not flying at all, as all planes are "flawed" in that some will eventually crash. Risk can be managed but not eliminated.

In all walks of life, most accidents result from a coincidence of two or more unusual circumstances.
Untrue. Most accidents are predictable and preventable. And they happen not because of unusual circumstances but due to normal circumstances in which some protection system has failed. This is a key foundation of the safety literature and of the practice and theory of risk management.

In your example you have the failures of 1) looking at your phone and 2) a car pulling out irresponsibly. Both are common events, not unusual, and both are easily preventable.
 