Session E-ERR

Debugging and Error Handling in an Object-Oriented Environment

Lisa Slater Nicholls
SoftSpoken


Abstract

VFP will make some changes in the way your applications handle errors, and the way you cope with them during development, too. This session will concentrate on code strategies to log and respond to errors in applications. It will also take a look at the FoxPro debugging tools to see how they "react" to the changes in the Fox language and event model. We'll emphasize:


Handling errors with a new VFP strategy

Visual FoxPro is so full of new features that it's difficult to know how to integrate what we already know with the vast amount of learning we have to do.

Error handling is a good example of this problem.

We all have tried-and-true methods of debugging and capturing error information in our applications, but suddenly the new event, object, and data models threaten to invalidate all our techniques.

To add to our problems, the error handling capabilities of VFP have not changed as much as other parts of the product. We need to improve our legacy error handlers to meet increased needs, but we're still working within a legacy error system.

This legacy system may be a blessing in disguise! It turns out that our current error handlers can be adapted quite gracefully to work with VFP's simple error enhancements, and they are adequate to the task of monitoring VFP's radically-changed features. Meanwhile, you can breathe a sigh of relief knowing that in this one area you don't have to change everything you do to work in Visual FoxPro.

Here's how it works: Objects can handle their own errors internally, using an Error( ) method that is common to all the base classes. When an error occurs during the execution of an object method, VFP triggers an error event. Naturally, this invokes any code you've placed in the object's Error( ) method.

To write this code, be aware that VFP passes the Error( ) method three parameters automatically: the error number, the name of the object method in which the error occurred (or the name of a UDF, if your method called out to one and the UDF contained the error), and the line number on which it occurred. The decisions you make at that point are fairly straightforward; you will model them on the error handling you've done in previous versions of FoxPro.
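For example, a minimal Error( ) method might do no more than display what it was handed (the wording of the message below is purely illustrative):

* Error( ) method code: a minimal sketch
LPARAMETERS nError, cMethod, nLine
WAIT WINDOW "Error " + LTRIM(STR(nError)) + ": " + MESSAGE() + ;
   " in " + cMethod + " at line " + LTRIM(STR(nLine))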

However, just like other methods, the Error( ) method does not fire if you don't write the code. Instead, it inherits behavior from its parent class. If you write no Error( ) method code anywhere in the class tree, the base classes respond by invoking your ON ERROR procedure instead. (If you don't have an ON ERROR routine in force, VFP's default error mechanisms will fire.) Therefore, if you don't add any internal error handling, your ON ERROR routine should handle everything, as usual.

This is tremendously convenient, because many error-handling behaviors are not specific to a particular base class. For example, we want to offer the user a chance to exit gracefully, and we want to preserve details about the error in a log. This sort of mechanism is as important to an error in a command button method as it is to an error in a timer. It would be silly if we had to write it (and maintain it) in all our class trees.

Yet, as you realize, some error handling will belong specifically to a base class, or even to a subclass. For instance, OLE controls (OCXs) need to worry about whether their server applications are properly registered on the target machine; command buttons do not. Similarly, a form that performs data updates must be concerned with locking and contention errors, but a form which allows the user to print to file should be more interested in possible diskspace problems.

The answer, as you'll see, is to properly classify errors and handle them in the right place, at the right level. Object Error( ) code can be object-specific, and yet it can invoke outside error handling devices when necessary. In this scenario, the ON ERROR handler functions as the "court of last resort", when more immediate and knowledgeable coping devices have been exhausted. The ON ERROR routine also provides common services that almost all error situations require.

It's useful to pause a moment, here, to think about how ON ERROR got to be the kind of command it is. ON ERROR dates from a time before Xbase had anything resembling "event-driven" or "interrupt-driven" capabilities built into it. For a while, ON ERROR was the only "event" that could happen at any moment and interrupt the procedural flow of Xbase applications.

Therefore, the ON ERROR routine provided a place to take care of events that happened in a system that weren't really errors at all. For example, when a user tries to edit a record and can't get a lock, this is really not an "error". Yet Fox designers were able to make use of the ON ERROR ability to happen at any moment to implement automatic locking capabilities they couldn't arrange otherwise.

You can see, with this perspective, that making ON ERROR do less work makes sense now. We have many other interrupts and events, and we can deal with problematic situations in their proper place. Yet maintaining and updating our ON ERROR routines to handle new VFP capabilities gives us fallback security. We don't have to identify and code for every error in every object to cope with unknown situations.

Some of you are probably wondering why I'm talking about an ON ERROR routine instead of an Error Object that could provide the generic, fallback error handling capabilities needed throughout an application.

You're right – it might make sense to design a class whose sole job is to handle errors.

In this approach, the error object would do much the same job as ON ERROR does now, but (like all objects) it would be better protected than our current ON ERROR procedures. We wouldn't have to worry about the scope of any variables we'd need to create while evaluating or handling error conditions if they were object properties. We wouldn't have to worry about the scope of like-named procedures, either, if we could simply invoke the error object's methods. We could subclass error handling techniques for different needs, according to any system we wanted, without rewriting the code.

You'll find this approach used in other languages. For example, the Delphi class tree includes an Exception class and a clearly designed hierarchy of error types underneath (organized according to the resources requiring protection and the other exceptions that may occur). You can subclass these error classes to cover user-defined objects, so the error handling is infinitely adjustable to your object and class set. For example, some portions of the Exception class tree are designed to clean up and exit, as is appropriate for various programming errors, and others are designed to handle the error and continue running, as you might want to do after an attempt to open a non-existent file.

Delphi also allows the programmer to specify some behavior to happen no matter what else occurs and what other processing is interrupted. For example, freeing memory used by an object should always happen, even if the object's other tasks are aborted halfway through. Meanwhile, the error can be made to remain visible after this special cleanup is done, so that it can be handled by outer objects and processes if it wasn't a type of error specific to this object.

We can learn a lot about designing our own error handling reactions by looking at a system like this. (I am indebted to Steven Black for pointing out the Delphi docs to me!)

We can emulate this functionality without using an error object, however.

In spite of the attractiveness of objects from a scoping point of view, I think we have little to gain by DEFINEing an Error class AS CUSTOM and building our error-handling functionality into an object system. In fact, we have quite a bit to lose.

A custom error class introduces more memory use and more possible errors – the very last thing you want when you are in the midst of an error event already! Just think what would happen if you tried to instantiate the error class if you were already low on system resources.

Let's take one simple example: many of you may be using Tom Rettig's ENVLIB system of saving and restoring environment options using a custom class. This system has a tremendous advantage over saving and restoring options in procedural code, because you can attach the ENVLIB objects to containers (forms or other VFP objects).

When the container goes out of scope, the ENVLIB objects are destroyed automatically, and as they are destroyed they re-set the environmental options for which they are responsible. You don't have to write any code to do it. Meanwhile, you can scan your ENVLIB object-creation code for any container and instantly see how you have set up that container's environment.

But if I used a system like this within an error-handling process, whether object-based or procedural, I'd run the risk that each ENVLIB object (one for each environmental setting) I added would increase the burden on the general system. Moreover, if my problem was a file-use or file-location error, perhaps my ENVLIB.VCX would not be available either, or perhaps it would be corrupted.

Bottom line, it just isn't worth the risk.

You'll notice that my error handling is designed to use only internal VFP features (no forms, no library calls, and no objects). I'm happy that we can now use MESSAGEBOX( ) to make the display of error dialogs a little more flexible and attractive; before this, I would have used WAIT WINDOW TO <memvar> almost exclusively, as a way of limiting possible problems from custom code.

We can easily imitate Delphi's exception-handling features without building them into objects at this point. The secret, again, will be to design our error handling at the proper level, using minimum resources and functionality available now in VFP.

We might put critical cleanup code in the object's Destroy( ) method, where it would be sure to fire (to imitate Object Pascal's try... finally block). We can also have our objects' error methods delegate back, after handling their own specific problems, to let other entities in the system continue the work, just as Delphi allows exceptions to remain viable.
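For instance, cleanup that must happen no matter how the object's work was interrupted can go in Destroy( ). A tiny sketch (the cursor name is invented for the example):

* Destroy( ) method code: guaranteed cleanup, in the spirit of try...finally
IF USED("curWork")        && curWork is an invented work cursor
   USE IN curWork         && always close it, even if other tasks were aborted
ENDIF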

We can even look at the kinds of messages Delphi exception objects are prepared to accept and send, and at its error-classification exception class hierarchy, and use these as a model for our own error system features.

Readers of my 2.x books will realize there's a pattern here. Before OOP was available in FoxPro, I looked for methods to imitate the OOP qualities of encapsulation, polymorphism, and even inheritance, that we wanted for robust and re-usable code. Now, I've thrown away those methods because they're not needed in VFP – yet the same habits of thought and design that served well in 2.x direct my 3.0 class design strategy and make life easier.

We're in an analogous position with error handling in 3.0. We can go only so far in the OOP direction we want to go, without imposing dangerous and useless overhead. Yet we can design with the goals that OOP teaches us. When we discard our implementation in future releases, our design strategy will still be useful and re-usable.


Implementing the strategy

The most crucial part of implementing a well-designed error strategy will be proper classification of errors, according to several different structures:

The first classification system is the most profound. There are three kinds of errors you need to handle completely differently in your applications, according to when they occur in the life cycle of an application.

The first error type could be called "brain-dead programmer errors", resulting from mis-use of your own code. That is, you write code dependent on certain assumptions and then you use the code having forgotten what those assumptions are.

For example, you write a generic function that requires four arguments, and you pass them in the wrong order or add a fifth. Or you create an .APP as a module of a larger program, and then mistakenly try to execute it from the command line.

When your code assumes certain conditions, these assumptions should be thoroughly documented in your code, and of course you should write tests into the code that make sure the assumptions are warranted before the procedure continues to work. These tests take place at the top of any routine, and therefore serve as automatic documentation of the procedure's assumptions.

You probably should test for all arguments passed to every function you write, for only one obvious example.

I used to write TYPE( ) checks for every parameter in my code. If the check failed, I sometimes returned an error to the calling program and sometimes supplied a default value.

This is a common practice, but it's not a good one. Default values allow the calling programs to be "lazy" about the way they use subroutines, but they are dangerous. Returning an error to the calling program requires the caller to handle the error condition, and it's quite a burden to write that kind of error-handling into every call in every program you write.

Instead, I've borrowed a C programming practice called assertions. If a program's requirements or inherent assumptions are not observed by another program using its services, the program deserves to fail immediately. I don't want these programming errors compensated for with default values, so that I might not realize my error.

I don't want to handle them in code that remains in the finished application either. It's inefficient overhead; all those TYPE() checks are quite slow, and they're just handling problems that shouldn't be left in the finished product in the first place.

Error handling by assertion is removed from the finished product by the use of #DEFINEs. Here's the way my #DEFINEd assertion lines look in my #INCLUDEd header files:
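(In outline only; treat what follows as a sketch, and the PROGRAM( ) argument as just one possibility.)

#DEFINE _DEBUG .T.        && flip to .F. for a distribution build
#IF _DEBUG
   #DEFINE _ASSERT DO M_Assert WITH PROGRAM(),
#ELSE
   #DEFINE _ASSERT NOTE
#ENDIF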

Notice the final comma on the DO M_Assert line above. This #DEFINE is really half a procedure call. Now I write code like the following as assertion tests in my code:
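(The routine name, parameter, and message below are invented for the example.)

* first lines of a hypothetical ProcFile( ) routine
LPARAMETERS tcFileName
_ASSERT TYPE("tcFileName") = "C", "ProcFile( ) needs a character file name"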

As you can see, when I change the value of _DEBUG and recompile, the resulting object code will contain one of the following two statements:
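(Continuing the sketch above, and again only approximately:)

* with _DEBUG #DEFINEd .T.:
DO M_Assert WITH PROGRAM(), TYPE("tcFileName") = "C", "ProcFile( ) needs a character file name"

* with _DEBUG #DEFINEd .F.:
NOTE TYPE("tcFileName") = "C", "ProcFile( ) needs a character file name"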

(Each of the lines above is logically a single statement, even where it wraps on the page here.)

The actual M_ASSERT.PRG is fairly straightforward, evaluating any condition I place on the _ASSERT line, providing a dialog with information, and CANCELling if the assertion fails.

Writing assertions like this at the top of every function becomes second nature, like writing a function header, and the _ASSERT line serves to tell me exactly what my functions expect from their environment. Of course, not every assertion tests a parameter, but I find it useful to group them at the top of the function all the same.

I use CEE (Cobb Editor Extensions, by Ryan Katri and Randy Wallin) to make it easier for me to write the assertions into my code. With CEE, all I really have to do is type a single keyword indicating that I'm ready to write an assertion. CEE presents me with dialogs asking for the assertion condition and the message for the dialog M_ASSERT will present if the test fails. The _ASSERT line then appears in my code.

If you've used CEE in FoxPro 2.x you may not know that CEE3 now has a way of looking at parameters in a method or procedure, and re-iterating a macro for each parameter it finds. This means I only have to type my PARAMS keyword once to write all my standard parameter-checking assertions.

CEE is a shareware program; I've placed a copy on your source disk. However, CEE doesn't come supplied with the relatively elaborate macros I've written to aid my personal assertion techniques, so I've placed them in the README.DBF engine behind LSN_DEMO, the app that allows you to run my sample code. You'll find my CEE macros in the Assertions for Predictable Errors README.DBF topic, which also examines and demonstrates M_ASSERT.PRG and the header files containing the related #DEFINEs.

As you continue to explore LSN_DEMO, you'll also find many working examples of assertion use in the functions and methods in the Master Class Application Framework that form the base of my session source code.

Useful as assertion techniques are, they solve only one piece of the puzzle. What about errors you can't know about in advance, because they don't arise from programming assumptions you already know about? What about all the simple logic errors you may make, some of which will inevitably remain in the application you distribute?

You simply can't avoid errors you can't predict in advance. Unpredictable errors are the ones you need to log in a table for later examination by developers, and for which you need to provide a choice of exits for your users. The README.DBF topic A Global ON ERROR System introduces you to my ON ERROR handler, M_ERROR.PRG. Your ON ERROR routine is the right place for errors you don't know about and so, by definition, cannot be limited to specific objects. The README.DBF topic Developer's Use of ERRORLOG.DBF gives you a quick introduction to the error log M_ERROR produces when appropriate.

 

Of course, even in a global handler some additional classifying of errors does occur. M_ERROR includes a lot of code that looks like this:
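(What follows is the shape of the code rather than its literal text; the parameter names are just the conventional ones.)

* M_ERROR.PRG: global ON ERROR handler (shape only)
PARAMETERS nError, cMessage, cCodeLine, cProgram, nLineNo
DO CASE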

... followed by CASEs that are tailored to some broad categories of error handling. For example, if the error is a diskspace problem I won't attempt to write a log! It's important to identify certain errors as "fatal" and create a quick exit.

M_ERROR's list has been updated to allow new VFP categories of error handling, such as ODBC errors and OCX errors. Anything I have not classified in this list remains in a broad OTHERWISE case of "programming errors", destined for logging at the developer's discretion. This classification system allows a quick tailoring of the ON ERROR response to specific and significant problems while not forcing you to create a separate CASE for every error number in the extensive VFP system.

Not all errors can be avoided simply by fixing your code. Some are hardware and configuration failures that will be "caught" on the global ON ERROR level, because they are completely unpredictable, just like logic errors of which you are unaware. Others can be predicted for certain situations and certain objects, such as a diskspace error in a procedure that prints to file.

The second type, predictable yet unavoidable runtime errors, should have their error handling localized to the procedure or object that will cause these errors.

In addition, there are errors which will only occur when you use certain services outside VFP, such as a purchased library or driver or OCX. These errors may be unavoidable (such as the service not being appropriately installed) or they may be logic errors (if you address the service incorrectly in your code). Error handling for both these types of errors should be localized to the procedures or containers that you use to contact these special services. There is no need to build error handling for the errors of a specific OLE control into your global error handler. (You still build generic OLE control error handling into the ON ERROR routine, just to cover cases you haven't thought of.)

LSN_DEMO's LSN_OLE.SCX shows you a simple example of localized error handling, in a form that includes an instance of the Outline control. In this case, I built in a deliberate error in my code, but you might get additional errors if the Outline control is not registered in your copy of Windows. See the topic Delegate Error to Container Method & Up in LSN_DEMO.

When you examine LSN_OLE.SCX, you'll notice that the OLE object itself does not contain the method code to "react" to the OLE-specific error. Instead, I built this behavior into the Error method of the form:
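(What follows is a rough sketch rather than the form's literal method code; the error number is an illustrative stand-in, and the real message text would come from M_STRING.H.)

* Error( ) method of the LSN_OLE form: a rough sketch
LPARAMETERS nError, cMethod, nLine
#DEFINE OLE_ERROR_NUMBER 1426     && illustrative stand-in for the OLE error number(s)
DO CASE
CASE nError = OLE_ERROR_NUMBER
   * an OLE-specific problem: warn, and carry on without the outline refresh
   = MESSAGEBOX("Problem addressing the OLE outline control.", 48)
OTHERWISE
   * everything else gets the normal ancestor treatment
   frmMaster::Error(nError, cMethod, nLine)
ENDCASE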

 

All objects on this form can now delegate to their container to handle OLE errors, instead of my addressing OLE-specific errors in the Error( ) method of each object. After all, any object on the form might want to refresh the outline control with new values, and cause an OLE error in the process.

How does this delegation occur? LSN_OLE.SCX is based on classes you'll find in M_BASE.VCX. My "ancestor" versions of each base class, of which I have only included a few samples here, have error handling built in. Following the Delphi model and some good common sense, I've determined that all objects should call outward to their container to handle errors that are not particular to them. Here's what the base code looks like for the oleMaster class and all my other ancestor classes:
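(Again, a sketch of the shape rather than the literal code; the TYPE("THIS.Parent") test is one reasonable way to detect a containing object.)

* Error( ) method of oleMaster and the other ancestor classes: a sketch
#INCLUDE m_header.h               && brings in the EXEC_* #DEFINEs discussed below
LPARAMETERS nError, cMethod, nLine
EXEC_OBJECT_ERROR_MESSAGE         && visible reminder, in _DEBUG mode only
IF TYPE("THIS.Parent") = "O"
   * hand the error outwards to whatever contains this object
   THIS.Parent.Error(nError, cMethod, nLine)
ELSE
   * no container at all: fall back on the global ON ERROR routine
   EXEC_ON_ERROR_IN_ERROR_METHOD
ENDIF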

 

All my classes will look "outwards" by default to get the error handler built into the topmost parent container, passing VFP's automatic arguments as they go.

I will probably build a similar system to delegate HelpContextID requests. In other words, if an object has no help topic of its own, check to see if its container does. (I don't have to worry about looking upwards at the class hierarchy here, since if I have not assigned an explicit overriding value to the HelpContextID property, the object will automatically have inherited its class's value for this property.) If I get all the way to the top-level container and still have found no appropriate help topic, I can SET TOPIC TO an appropriate topic or topic ID, using CASE statements that evaluate the user's current situation just as I did in FoxPro 2.x.

The VFP help system, like its error handling system, has been somewhat upgraded but is not as thoroughly object-oriented as the rest of the product, which may account for the similarities between the strategies appropriate to the two systems, for this release of VFP.

However, this approach (looking outwards to parent containers) has one flaw: I can't stop you from using my ancestor classes in containers that don't follow my rules. If the container doesn't have any error handling, nothing at all will show up as an error. Therefore, in _DEBUG mode, I've built in an extra error message to make sure that something shows up.

EXEC_OBJECT_ERROR_MESSAGE is defined as a WAIT WINDOW in M_HEADER.H, for easy change if I decide a different sort of alert is appropriate:
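(A sketch; the message text here is illustrative, and in the real header the string would come from M_STRING.H.)

* in M_HEADER.H (a sketch; _DEBUG is #DEFINEd earlier in the header)
#IF _DEBUG
   #DEFINE EXEC_OBJECT_ERROR_MESSAGE WAIT WINDOW "Error not handled by this object, delegating to its container" NOWAIT
#ELSE
   #DEFINE EXEC_OBJECT_ERROR_MESSAGE NOTE
#ENDIF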

The #DEFINEs also provide an easy path to localization. (You'll find that all the error messages in my ON ERROR system are #DEFINEd in M_STRING.H for easy localization.) You might prefer to use a table to drive your string messages, rather than #DEFINEs, because the #DEFINEs require re-compilation for each language version. However, they offer superior execution speed, and simplicity of editing, so I use them in preference to a localizing STRINGS.DBF.

EXEC_OBJECT_ERROR_MESSAGE won't provide protection or true error-handling, just a visible reminder in _DEBUG mode in case you use my classes outside my hierarchy. (As long as you have even the simplest code in your containers' error methods, this message is probably redundant, but it doesn't hurt anything and disappears except when _DEBUG is #DEFINEd .T..)

On the other hand, the second #DEFINE, EXEC_ON_ERROR_IN_ERROR_METHOD, is very significant. This statement always executes if the object has no parent (is a top level container). If we've proceeded all the way up to the top-level object and still have not handled a particular error anywhere along the way, EXEC_ON_ERROR_IN_ERROR_METHOD invokes the failsafe global ON ERROR routine.

Here's the way I normally invoke my ON ERROR routine from procedural code:
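(Give or take the exact argument list, which simply has to match what M_ERROR expects, it is the familiar FoxPro idiom.)

ON ERROR DO m_error WITH ERROR(), MESSAGE(), MESSAGE(1), PROGRAM(), LINENO()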

 

In my header file, the following line #DEFINEs my alternative calling syntax for invoking the ON ERROR routine explicitly from within an object method:
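(Roughly; again, the argument order just has to agree with M_ERROR's parameter statement.)

#DEFINE EXEC_ON_ERROR_IN_ERROR_METHOD DO m_error WITH nError, MESSAGE(), MESSAGE(1), cMethod, nLine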

 

Here, you can see the way the arguments VFP passed to the Error method are being relayed to M_ERROR for its use, just as I passed them all the way up the container tree.

The chain can stop at any time, if any container "understands" and wishes to respond to the error. Meanwhile, as you have seen in LSN_OLE's form, a container may understand some specific errors and yet wish to provide generic handling for all other purposes. In this case, it simply delegates to its parent class in the normal VFP way (such as frmMaster::Error(nError, cMethod, nLine) for LSN_OLE's form). Its class will continue to check "upwards" for parent containers, without your needing to write this code again.

If you examine my frmMasterEdit class in M_BASE.VCX, you'll see another variation you'll find useful:
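(Sketched here with an invented constant standing in for the real trigger-failure error number.)

* frmMasterEdit.Error( ): sketch of augmenting, rather than overriding, a CASE
LPARAMETERS nError, cMethod, nLine
#DEFINE ERR_TRIGGER_FAILED 1539   && illustrative stand-in for the trigger-failure error
DO CASE
CASE nError = ERR_TRIGGER_FAILED
   * local handling: revert the record so the data session can close cleanly...
   = TABLEREVERT(.F.)
   * ...then delegate back so frmMaster still logs the error as usual
   frmMaster::Error(nError, cMethod, nLine)
OTHERWISE
   frmMaster::Error(nError, cMethod, nLine)
ENDCASE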

You can delegate back up the parent class tree in the same CASEs that require custom error handling for a subclass. For example, frmMasterEdit (which is designed for data entry) may take care of a record update problem locally but still delegate back to frmMaster to perform normal error logging procedures.

The local handling ensures that the record will be properly reverted (no user cancellation is available at this point), so that the data session will close properly and VFP will not worry about uncommitted changes if the user attempts to quit. However, the global handling makes sure that you still get a log of this potentially serious error. This type of double action is usually referred to as augmenting rather than overriding the behavior of the class.

You've now seen two error classification systems at work, each complementing the other.

On one side, you must be concerned with when errors occur, distinguishing the errors you can cure during development from the errors you can predict and handle locally, and from the errors you must simply cope with because of runtime conditions.

On the other side, you must decide to what level of ownership an error belongs. Should it be handled by an object, overriding or augmenting the behavior of its class? Should the object or class error handling get additional help from containers which define error responses for the process to which the object belongs? Should you call in the generic capabilities of the ON ERROR routine, to augment or supplant object- and class-specific behavior?

When you examine LSN_ERR.SCX, you'll get an example of putting these techniques together in a real world data entry situation. In addition, LSN_ERR is an example of the "overloaded" error handling system, because it handles both locking contention problems and "real" errors.

If you DO LSN_ERR, the demo file will put up two sessions of the LSN_ERR data entry form. This particular form is designed for optimistic locking (the most complex type from an error handling point of view), although the frmMasterEdit on which it is based can handle any type of locking, as long as all tables are buffered. (LSN_ERR is a one-to-many editing form, with the parent using optimistic row buffering and the child table using optimistic table buffering.)

You'll need to attempt edits and commits on both forms, back and forth, for a little while, to see how the strategy plays out. Although the form is supposed to use optimistic locking, I've added a button to allow you to lock tables explicitly, just to see how the forms react. See the Data Entry Update: All Error Elements topic in LSN_DEMO.

 

LSN_ERR.SCX gives you a good illustration of why you should go to the container for error handling. Consider a button on the form that changes data and violates a rule. You certainly don't want business rule handling to take place on the button-level; the entire form should monitor the error and provide a response, since it's very likely that more than one control on the form will cooperate in the editing process. Meanwhile, the frmMasterEdit class sensibly delegates up to its container, if there is one, ending up with ON ERROR handling of errors that are none of its business, as usual.

Pay particular attention to the QueryUnload( ) method of the frmMasterEdit form class, which will show you how to make your document windows follow standard Windows practices when the user tries to close an application in the middle of one or many edits. (My QueryUnload( ) will behave just as WinWord does when the user tries to quit. Each document that has undergone editing since its last Save will individually ask the user whether s/he wants to save or discard changes, or cancel. In the last case, the application will not end.)

You may not think of this QueryUnload( ) behavior as error handling per se. But it is really a pre-emptive or pro-active form of error handling, making sure the user gets the results s/he has learned to expect in the Windows environment while protecting data.
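A skeletal sketch of that QueryUnload( ) behavior (the lDirty flag is an invented stand-in for however the form decides it has pending edits):

* QueryUnload( ) method code: a skeletal sketch of the WinWord-style prompt
LOCAL nChoice
IF THIS.lDirty                          && invented "edits pending" flag
   nChoice = MESSAGEBOX("Save changes to " + THIS.Caption + "?", 3 + 32)
   DO CASE
   CASE nChoice = 6                     && Yes: commit the buffered edits
      = TABLEUPDATE(.T.)
   CASE nChoice = 7                     && No: throw the edits away
      = TABLEREVERT(.T.)
   OTHERWISE                            && Cancel: the form (and the app) stays open
      NODEFAULT
   ENDCASE
ENDIF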

LSN_ERR.SCX also gives you a skeletal illustration of how you'd subclass and override to handle specific data needs. FrmMasterEdit has an error handling CASE for trigger failure that reverts the record values. You might subclass this form class and provide trigger-failure handling that fixes the record values and goes on with the update, instead of reverting, for certain circumstances. Meanwhile, all other error CASEs would delegate back to the standard frmMasterEdit behavior.

FrmMasterEdit shows you a good working set of the new commands and functions you can use to manage error handling. In particular, you will enjoy the AERROR( ) array function, which enables us to capture and examine error information. You'll also see me use the ERROR command to explicitly force an error when a TABLEUPDATE( ) has failed for some reason. When TABLEUPDATE( ) returns .F., I use AERROR( ) to find out why the update failed, and send the error on to the Error( ) method using the ERROR command to get there. (Check the frmMasterEdit custom Update( ) method to see how this works.)
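The heart of that technique looks something like this sketch (not the method's literal code):

* inside a custom Update( ) method: a sketch
LOCAL laErr[1]
IF NOT TABLEUPDATE(.T.)          && try to commit the buffered changes
   = AERROR(laErr)               && laErr[1,1] now holds the error number
   ERROR laErr[1,1]              && re-raise it, so the Error( ) method fires
ENDIF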

ERROR <expN> <expC> is more versatile for testing purposes than my examples show. The second, character-type argument can be used to supply the SYS(2018) error message parameter so that your explicit error-invocation has all the characteristics of a real error. You can also force the VFP error dialog to appear, showing a message of your design, generating a true "user-defined error". Our ability to access this dialog is as close as we get to having a native, manipulable error object like Delphi's.


What about the new event model?

We've been concentrating so hard on objects that we may have lost sight of one of the other major changes in VFP, and its impact on error handling. Yet the two problems are intimately connected.

The central problem posed by the new event model is this: all forms can be active in the interactive environment, without the need for a wait state enclosing an application the way GETs used to require a READ to be "live".

If you use the new READ EVENTS wait state, you have a degree of control over what is going on in your program similar to what we had in FoxPro 2.x. You can design setup and cleanup code you know will execute. If you want to set an ON ERROR command, you can save a previous ON ERROR setting with ON("ERROR"), and you can be secure in the knowledge that you can restore the environment to its previous state at the conclusion of your application.

However, if you just DO FORMs in the interactive environment, whether as little applets for your users or as part of developers' tools, beware of setting states which are not limited to data sessions and which have global influence!

Except for ON KEY LABELs and menus, we don't have stacks on which we can push and pop these settings. Even if we did, we could never guarantee, in this truly event-driven environment, that the applet or state into which the user "landed" after leaving our application would be one from which they entered our application, so we would never know whether we were pushing and popping to the right values.

Consider this scenario: Ken Levy's SUPERCLS toolbar, Andy Griebel's GAPP toolbar, and CEE's modal addin dialogs (macro monitors, and so on) all exist in my environment. If Ken saved an ON ERROR routine and set his own, he might restore the old one on the way out. But it might be Andy's, because I clicked from GAPP to SUPERCLS on the way in.

Meanwhile, I've just invoked CEE's dialog on the way out of SUPERCLS, not Andy's toolbar – perhaps CEE doesn't touch the ON ERROR command at all, and Andy's ON ERROR is totally inappropriate to it! Or perhaps CEE sets its own ON ERROR. Either way, Ken's restoration of ON ERROR is wasted. It's much better for each of the participants in the event-driven, interactive environment to avoid setting an ON ERROR at all.

Instead, look at the way my objects invoke the command I used to reserve for ON ERROR (DO M_Error WITH ...). You can begin to see how your objects and dialogs can do the same thing, invoking the same capabilities of the global error handler whenever their error events are triggered, even though the ON ERROR command has never been used to make it possible.

You can also see this problem as an example of how true encapsulation will become absolutely necessary to us. Global settings of all types, not just ON ERROR, will become inappropriate for us. Each dialog or formset, with its own datasession and SETs, will become something like a session of VFP in itself.

Handling debugging and tracing chores without the support of a READ EVENTS wait state carries its own problems. You have probably experienced difficulties when you wanted to trace through object code. There seems, at first glance, to be no way to invoke the Trace window, since lines of code are not continually executing. You can't deliberately invoke method code that is supposed to happen in response to events the same way you can choose to DO a program for tracing purposes.

The best solution I've found to this problem is a Debug window breakpoint on <<the method I want to see>> $ LOWER(PROGRAM()). Once this breakpoint is triggered and the method suspends, I can call up the Trace window and step through the problematic code.

If you find yourself doing this a lot, you may also find that your Debug window gets overly cluttered with breakpoints you no longer need. The easiest way I've found to clean up the Debug window is to SET DEBUG OFF, which clears it completely! This works in code or in the Command window. If you happen to have the Debug window available when you'd like to clear it, you can hit {Ctrl-End} interactively, which clears the Debug window while closing it; you can then re-open it for further use.

One more helpful addition to VFP commands and functions makes it easy to decide when your debugging window tools should be available: VERSION(2) will return 0 when you're executing your application under the runtime version. Now you can decide when to RELEASE the menu options invoking Trace and Debug without parsing VERSION(1)!
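For example (the menu pad name is invented):

IF VERSION(2) = 0                       && 0 means the runtime version
   RELEASE PAD devtools OF _MSYSMENU    && drop the invented developer-only pad
ENDIF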


Using the LSN_DEMO app and source code

Please take the time to look through the LSN_DEMO examples and Master Class source code thoroughly. In many cases they contain additional comments addressing fine points that are beyond the scope of these notes. The LSN_DEMO app's README file contains additional notes in its Details field, too. LSN_DEMO.PJX should help you organize your exploration of the source code. (You can run either its .APP or LSN_DEMO.SCX, which is the APP's main program, to see the demonstrations. I created the project mostly to include the various files, so that you'd have a way to look at all the elements of the examples.)

I hope you have a lot of fun looking through this stuff! I learned a lot preparing it. Remember that in error handling, as in most of VFP, you need patience and a long design period. You will take a long time deciding exactly how best to make use of all the new abilities we've been given. The actual code you have to write to implement your design will be comparatively limited and comparatively trivial.

Good luck with it – and please let me know how you get on.

Lisa Slater Nicholls

CompuServe ID 71333,2565