Either choose an existing mocking solution or devise a way to interface with the many different mocking libraries out there (ask the community for guidance). Implement a facility for creating mocks based on live objects gathered during code tracing.
Mock generation should take into consideration (1) the object being mocked and (2) the context of the test case. Context is important for readability. For example, suppose we captured the following series of three calls to an object:
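For illustration, such a captured sequence might look like this (the class, method names, and return values are hypothetical):

```python
class Example:
    """Hypothetical object whose calls were captured during tracing."""
    def first(self):
        return 1
    def second(self):
        return 2
    def third(self):
        return 3

# The traced series of three calls:
obj = Example()
a = obj.first()   # returns 1
b = obj.second()  # returns 2
c = obj.third()   # returns 3
```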
we want different mocks depending on which call we are testing. When testing the call to the "second" method we only have to mock "first". When testing the call to "third" we have to mock both "first" and "second".
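A sketch of this idea using the standard library's unittest.mock (the class, its methods, and the return values are hypothetical stand-ins for traced code): testing "second" only requires stubbing "first", while testing "third" requires stubbing both earlier calls.

```python
from unittest.mock import Mock

# Hypothetical traced class; later calls consume results of earlier ones.
class Pipeline:
    def first(self):
        return "raw"
    def second(self, data):
        return data.upper()
    def third(self, data):
        return data + "!"

# Testing the call to "second": only "first" has to be mocked.
obj = Pipeline()
obj.first = Mock(return_value="raw")
result_second = obj.second(obj.first())

# Testing the call to "third": both "first" and "second" are mocked.
obj = Pipeline()
obj.first = Mock(return_value="raw")
obj.second = Mock(return_value="RAW")
result_third = obj.third(obj.second(obj.first()))
```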
A strategy you may want to consider is using a mock-and-verify library such as Dingus or voidspace mock. Mock objects from these libraries return additional mocks by default when attributes are looked up or methods are called, so there is no "record phase".
The call list stored by these mock objects could be written to the autogenerated test case and modified as needed for more complicated mocking.
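As a sketch of that behavior, voidspace mock's successor lives in the standard library as unittest.mock; its Mock objects auto-create child mocks on attribute access and record every call, so the recorded list can be dumped into a generated test case:

```python
from unittest.mock import Mock

# No record phase is needed: attribute lookups and method calls
# return further mocks automatically, and everything is recorded.
m = Mock()
m.first()
m.second(42).nested()

# The stored call list could be written into the autogenerated
# test case and edited by hand for more complicated mocking.
calls = [str(c) for c in m.mock_calls]
```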
This strategy isn't fully automatic and requires quite a bit of user intervention. The general difficulty with mocks is using them correctly, so that the specification of the interface (the actual end-user-visible functionality) stays separate from the internal implementation. I can't think of an easy way to keep them separate without having a human inspect them.