Author -> billbuchan

Okay. This isn't a coding tip, per se, but it will be of most use to developers.

A common scenario, in order to rein in the developers, is to have a separate development environment where the developers can crash servers to their hearts' delight. (Being a developer, this is one of my few job-related joys.)

However, there usually is (and certainly should be!) a separate certification hierarchy.

Some people might choose to use "File, Tools (or Security... they keep moving it!), Switch ID", or might even go to the lengths of making custom Location documents: one for their "production" ID, and one for their "development" ID.

This is bad:

  • You keep having to switch between environments.
  • You risk attempting to use the wrong ID in the wrong environment.
  • You start getting certificates bleeding through into your names.nsf (personal name and address book).

All of which are bad.

So - the solution ?

Create separate Notes client data directories, and have separate notes.ini files (usually placed in the data directory) for each environment. Then use:

nlnotes.exe =<path to notes.ini>

as the target of a desktop icon to start each one up.

For instance, I have:

C:\Lotus\Notes6\nlnotes.exe =c:\notes\data\notes.ini

to start up my production client.
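If you also keep a development client whose data lives in a separate directory, its shortcut differs only in the notes.ini path. The development directory below is just an example; use whatever location you created:

C:\Lotus\Notes6\nlnotes.exe =c:\devnotes\data\notes.ini

Each shortcut launches a client bound to its own data directory and ID, so the two never interfere.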

You can then have multiple clients running on your machine, oblivious to each other.

The environments need never see each other. And you can continue doing useful stuff in both clients.

Okay - different versions. How about your production environment is v5, and your test is v6 ? Or the other way around ?

Simple - install the notes Executable client in separate directories, and use the same trick.

But does it work ?

On a good day, I have:
  • One Notes and Designer 4.6.7 client (don't ask)
  • Two Notes 5.0.12 clients (development and test of tooling)
  • Three Notes 6.0.1 clients (production, personal, and v6 app development)

All open at the same time.

So - no excuses. Separate directories, separate clients. Easy, simple, fast, productive. So what are you waiting for?

I have been working with Quickplace for quite a long time now. Initially, I really missed having good guidance and resources for Quickplace on the Internet. There is an IBM Quickplace forum, but I have personally experienced a lack of proper response from its members. I think Quickplace is not as widely used as Lotus Notes, and this could be the primary reason there are so few active Quickplace professionals. A few days back, one of my friends asked me, "How do I customize a Quickplace?" So I have decided to write a series of articles on Quickplace development and architecture. Here is the first part.

What is Quickplace (Quickr )?

Quickplace is a self-service Web tool for team collaboration. Use Quickplace to publish, share, and track all information relevant to a project. Teams can use Quickplace to store resources (such as files, discussions, and schedules) related to a project in a common place where everyone can access the latest information. For more information, refer to the IBM Quickplace site.

Architecture overview

Although Quickplace has its own metaphors and object model independent of Domino, it is implemented using core Domino technology and takes advantage of Domino data structures. A place is created using templates to structure data, and databases to store the data. Information in a place is stored in data notes — the basic unit of information in a Notes database. The structure of a place is further defined with objects such as rooms, folders, and pages that map to Domino objects.

Because the place objects are based on Domino objects, you can use the Notes client and Domino Designer to view, customize, and create new objects in a place.

Quickplace also uses a subset of the Domino/Notes security and authentication model to manage access to a place. It is helpful if you are familiar with the Notes security model, in particular with basic access control list (ACL) settings, and the use of Reader and Author fields.

Relationship between Quickplace and Domino objects

Quickplace file directory structure

Quickplace data is stored within a subdirectory named QuickPlace, below the Domino server's data directory. The complete directory structure is as follows.

I have been using the IBM Sametime client for real-time chat and communication with users for more than four years. I will honestly say that for the first year or two, I was not aware of all of Sametime's capabilities and its purpose in real business. If somebody had asked me, as a business analyst (no, I am not one), to explain the purpose and benefits of Sametime in a real-world scenario, I could only have shown them my blank face. Today I was discussing OCS (Microsoft Office Communications Server) with my technical architect, and thought I would read more about it on the net. I found a nice real-world example of using OCS in your business. I must say it is part of Microsoft's strategy to push OCS against IBM Sametime, but it is worth reading for anyone who really wishes to know what real-time collaboration is. Here are the contents -

What I'm about to discuss isn't a new concept. The fact is we all perform real-time collaboration on a daily basis in our interactions with other people. First, what do I mean by Real-Time Collaboration (RTC), or Real-Time Communication as some like to call it? Well, I personally consider RTC any interaction with another human being to share and discuss ideas in, well... real time. For instance, if you call Suzy and discuss today's meeting agenda, the conversation takes place in real time. However, if you swap 4 emails with Suzy to discuss the same agenda, it's not what I would call Real-Time Collaboration. This is due to the fact that you're sending messages back and forth but never carrying on a true conversation. Am I clear as mud yet?

So let's say you swap an average of 4 emails with 3 other people the same day to discuss the same agenda. That means it would take you sending 16 email messages to set the meeting agenda. Now do you understand why your inbox is so cluttered with meaningless messages? If you're like me, you just can't seem to keep the stupid thing clean (though I work with someone who never has more than 10 items in his inbox). Now let's talk about time for a minute. How much time did you spend sending those 16 email messages and reading the responses you received? Let's say it took you an average of 2 minutes to send each email message and 2 minutes on average to read each message. So you have spent 32 minutes sending email and 32 minutes reading responses. That's over an hour spent just discussing the agenda for this stupid meeting. OK, so let's say you're the type that just gets frustrated with all the email, so you decide to call each of the 4 other people. Oh wait, you mean you have to look up their extensions? Fine, let's say it takes you an average of 2 minutes per person to find his or her phone number and dial it. So far you've only wasted 8 minutes digging through your drawer for the phone directory. First on the list is Suzy.

You pick up the phone and dial her number. Suzy answers and you spend 5 minutes asking her how the weekend went. You quickly discuss your ideas for the meeting agenda and wrap up the call in about 10 minutes. Now you give Frank a call... Ugh... Voicemail. You leave Frank a message. No big deal, we only wasted another 3 minutes there. As you get up to grab some coffee, Frank calls back and has to leave you a voicemail: "Phone tag, you're it!!!" It takes another 5 minutes for you to call Frank back. He's at his desk now and you discuss the agenda with him for about 10 minutes. You just spent more than 30 minutes trying to discuss this stupid agenda and you still have two other people to call. Seeing a pattern here? In the corporate world we waste hours upon hours of valuable time swapping email and voicemail messages just to perform simple tasks. So what's the solution? Well, this is where I tell you how cool Real-Time Collaboration (RTC) is.

I think the best way to introduce RTC is by our above example. Pretend for a moment that you have a RTC system in place (Such as Office Communications Server 2007). You know you have to get this stupid meeting agenda finalized today. You sit down at your desk first thing in the morning and logon to your computer. You instantly check the presence or status of Suzy, Frank, and the other 2 people you need to discuss this agenda with. You notice that Suzy hasn’t signed on so she must not be in the office yet. You notice Frank’s status is "On The Phone". The other two people have a status of available. Well 2 out of 4 ain’t bad. You decide to take a trip to get some coffee. While in the break room you run into Bob and discuss last night’s game for about 5 minutes before you head back to your desk. You sit down and notice Suzy is now in the office and Frank is off the phone. With just a few clicks of the mouse you instantly create an Instant Message session with Suzy, Frank, and the other two folks who are providing input. You send your proposed agenda to everyone in the session via file transfer in your Office Communicator Client. After a quick review Suzy wants to make 1 change. Everyone else thinks the agenda looks great. As Frank and the other folks drop out of the IM session, you decide you need to talk to Suzy on the phone. You want to discuss this change a little more in-depth. You click on Suzy’s name and click the call button. Suzy instantly answers and you finish up the conversation in about 5 minutes. Just as you close the call window, you check your watch and notice it only took a total of 25 minutes to finalize the meeting agenda. You now head back over to the break room to finish your highly important conversation with Bob about last night’s big game.

So if you find yourself with a cluttered inbox and wasting countless hours on the phone, I recommend checking out Microsoft’s RTC solution - Office Communications Server 2007.

By Dustin Hannifin

A few days back I was analyzing a requirement in Quickplace where my user wished to get all the attachments from his entire Quickplace. When I looked at the Quickplace, I found it had more than 20 Notes (.nsf) instances and its size was more than 5 GB. Initially I thought a simple agent would do: read all the instances in a specific folder and extract the attachments to a specific drive on the server. With this in mind, I wrote the agent in a jiffy.

The code was something like this:

Mkdir "D:\QP Backup\XXXXX\" & strname
tmppath = "D:\QP Backup\XXXXX\" & strname & "\"
While Not vwEntry Is Nothing
    Set doc = vwEntry.Document
    If doc.HasItem("PageBody") Then
        Set rtitem = doc.GetFirstItem("PageBody")
        If doc.HasItem("$file") Then
            Forall o In rtitem.EmbeddedObjects
                If o.Type = EMBED_ATTACHMENT Then
                    fileCount = fileCount + 1
                    Call o.ExtractFile(tmppath & o.Name)
                End If
            End Forall
        End If
    End If
    Set vwEntry = vwEntryColl.GetNextEntry(vwEntry)
Wend

I started running the code with a smiley face, and got a surprise when it threw a "Type mismatch" error. I checked in the debugger and found a few documents had the attachment on the document itself, but not in the rich text field. So the following line started giving problems:

Forall o In rtitem.EmbeddedObjects

(Since, rtitem was empty)

I opened the Lotus help file, started digging around, and found that if the attachment is not in a rich text field, you should go for the EmbeddedObjects property of the NotesDocument class. I modified my code and tested again, and again it was the same error at the same line.

After spending a good amount of time on this small issue, I found a solid way to deal with it.
My new code was something like this:

Set vwEntry = vwEntryColl.GetFirstEntry

Mkdir "D:\QP Backup\XXXXX\" & strname
tmppath = "D:\QP Backup\XXXXX\" & strname & "\"
While Not vwEntry Is Nothing
    Set doc = vwEntry.Document
    If doc.HasItem("PageBody") Then
        Set rtitem = doc.GetFirstItem("PageBody")
        If doc.HasItem("$file") Then
            If rtitem.Type = RICHTEXT Then
                ' macro holds the attachment file names for this document
                For icount = 0 To Ubound(macro)
                    Set object = doc.GetAttachment(macro(icount))
                    fileCount = fileCount + 1
                    Call object.ExtractFile(tmppath & Cstr(macro(icount)))
                Next
            End If
        End If
    End If
    Set vwEntry = vwEntryColl.GetNextEntry(vwEntry)
Wend

Pretty simple, huh? :)

I tested all the possible scenarios and found it works in both cases (field-level and document-level attachments).

I would like to see if someone comes up with some new logic.
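One possible alternative, offered only as an untested sketch: the @AttachmentNames formula returns every attachment name on a document, whether the file sits in a rich text field or at the document level, so a single loop may cover both scenarios without the rich text special-casing (variable names follow the snippets above):

Dim names As Variant
' @AttachmentNames lists all attachments, field-level and document-level alike
names = Evaluate("@AttachmentNames", doc)
Forall n In names
    If Cstr(n) <> "" Then
        Set object = doc.GetAttachment(Cstr(n))
        If Not object Is Nothing Then
            Call object.ExtractFile(tmppath & Cstr(n))
        End If
    End If
End Forall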

What is the future of Lotus Notes?

1. Write comments

When faced with something reasonably complex (and therefore prone to bugs and errors), the easiest way to approach the problem is to sit down, really think about what it is you’re trying to achieve, and then write the comments. Keep them short, clear and above all, accurate. When you come around to actually writing code, trying things out, and debugging, keep referring to those original comments. This way you keep to the requirement. I’ve lost count of the number of times I’ve written fiddly stuff over the course of a couple of days, with my code gradually wandering more and more from the point. It’s such an easy trap to fall into, yet equally easy to avoid.

There’s another reason for thinking about your code and writing the comments up front of course. Have you ever hit an impasse with some convoluted agent or similar, and in explaining the problem to a colleague, inadvertently hit upon your solution? I know I have: it’s a known phenomenon we comment upon at work. Well, it turns out that there’s a reason for these frequent bouts of sudden clarity. “Doing” and “expressing” use different parts of one’s brain. So, writing down a “header” for your code before the lines of Java or script come flowing can actually be really useful in the development stage.

So, make sure you keep the comments succinct, accurate, and up-to-date.

2. Test-driven development

Ah yes, a phrase you’ve no doubt heard bandied around. It’s not an especially new idea, but has gained currency through the promotion of so-called “extreme programming” (XP), a process which encourages concepts like pair-programming, iterative software releases, user stories and, above all, an approach based on “test first, code second”. This dove-tails neatly with my tip about using comments, and entails writing the test for your code before you’ve even started on the code itself, if that seems an appropriate way to go.

It sounds odd I know, but let’s think about it: if you can’t visualise how you’re going to test something, then you probably need to re-assess what it is you’re trying to achieve, and set out your code accordingly — chances are you’re not in a position to write anything remotely complex that will function properly unless you can test it.

A second benefit to test-based coding is that your code will be tested at the “micro” level: each component has undergone some testing before it becomes part of the larger picture, which can save a whole lot of time and heart-ache. The more dependencies you introduce, the harder it is to reproduce and pin-point strange little bugs that creep in.

By way of example, recently I wrote a LotusScript routine for document validation. The basic code checked whether specified fields had been populated or not, and collated all omissions into one prompt box for the user at the end of the validation cycle.

Now, what about other forms of validation? How would you work those in to a custom routine? You might want to check that a field contains only alpha-numeric characters, or perhaps you need to validate an email address is correctly formed. Maybe you need to assess typed-in URLs, or ensure that a field is of a certain length.

Pre-tested modular code makes adding this kind of stuff a snap. For each scenario, if you write a test routine, code a routine or function accordingly, and then test it, you can be reasonably comfortable that subsequent additions don’t break anything already there.
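As a sketch of what "test first" can look like in LotusScript (the routine name and messages here are invented for illustration), the test for a hypothetical IsValidEmail function can be written before the function itself exists:

Sub TestIsValidEmail
    ' Each line states the expected behaviour before IsValidEmail is written
    If Not IsValidEmail("jane@example.com") Then Print "FAIL: valid address rejected"
    If IsValidEmail("not-an-address") Then Print "FAIL: junk accepted"
    If IsValidEmail("") Then Print "FAIL: empty string accepted"
    Print "TestIsValidEmail finished"
End Sub

Run the test, watch it fail, then code IsValidEmail until the failures disappear.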

3. Time machines

Source / version control is your friend: please use it! It doesn’t have to be sophisticated, just something that works for you and enables you to “roll back” your designs at any time with the minimum amount of effort.

It’s all very well to say that you should just fix one thing at a time when you’re in the coding / testing / debugging phase, but who does that? It’s all too easy to tweak a variable here, a function there, and — blam! — you’ve broken something. Documenting the big changes, and taking frequent back-ups of your code will help, even if that just means exporting LotusScript / Java source files from your databases every now and again. My previous article mentioned maintaining a code library, and this is useful here too: keep snippets of code you’ve created and discarded in a library rather than clogging up production designs with commented-out code and experiments.

Versioning and code management is an accepted practice in other programming disciplines, so why should Notes and Domino be any different? Perhaps the most well-known versioning system is Concurrent Versions System (CVS). I’d love to see a Notes / Domino solution that leverages the power of CVS or Subversion, but suspect anyone would have their work cut out writing such a beast. In the meantime, a number of third party products exist to help with Domino source control, so you may wish to investigate some of these. They range from simple code roll-back to fully-blown server / client version control with design locking and so forth. I recommend you look at these products before the über-geek in you decides to code your own CVS / DXL interface-based source control system (or is that just me? Ahem).

4. Code defensively

Think it will never happen? Think again. An oft-quoted rule of thumb in matters XML borrows from the late Jon Postel: “Be liberal in what you accept, and conservative in what you send.”

I extend this to programming in that you should assume nothing. Say for example you have some LotusScript to access a specified NotesDocument object. You might do this:

Dim session As New NotesSession
Dim db As NotesDatabase
Dim vwLU As NotesView
Dim doc As NotesDocument
Dim strKey As String

Set db = session.CurrentDatabase
Set vwLU = db.GetView("MyLookup")
Set doc = vwLU.GetFirstDocument
strKey = doc.GetItemValue("foo")(0)

getAnotherDoc strKey

Looks OK doesn’t it? But there are way too many assumptions going on in here. First of all, the code assumes that the view exists in the database, and we can get a handle on it OK (i.e. it’s not been inadvertently flagged as private, or had ACL security applied to it).

Secondly, the script assumes that the view is populated, so that we can grab the first document in it OK. Again, the view may be empty. It may be an unpopulated folder. There may be data in there, but we don’t have access to it.

Finally, even if we get the document, there’s no guarantee that strKey will get populated, and then all hell will break loose in the getAnotherDoc() sub-routine.

All of these conditions will result in myriad “Object Variable Not Set” errors. And we don’t like them sir, no we don’t.

So, as Pragmatic Programmers say: code defensively: check that variables and object variables have been instantiated, especially if you depend on them further down the line. If something goes wrong, ensure that it fails gracefully. This kind of work is tedious, and means extra typing, but your code is worth that surely?
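Applied to the snippet above, a defensive version might look something like this (the message texts are placeholders):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim vwLU As NotesView
Dim doc As NotesDocument
Dim strKey As String

Set db = session.CurrentDatabase

' Check each object before depending on it further down the line
Set vwLU = db.GetView("MyLookup")
If vwLU Is Nothing Then
    Messagebox "The MyLookup view could not be found."
    Exit Sub
End If

Set doc = vwLU.GetFirstDocument
If doc Is Nothing Then
    Messagebox "The MyLookup view is empty."
    Exit Sub
End If

strKey = doc.GetItemValue("foo")(0)
If strKey = "" Then
    Messagebox "The foo item is missing or empty."
    Exit Sub
End If

getAnotherDoc strKey

More typing, yes, but each failure now produces a clear message instead of an "Object Variable Not Set" somewhere downstream.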

5. Error handling

All too often decent error trapping and handling constitutes the most over-looked aspect in Notes and Domino projects. Repeat after me: On Error Resume Next is rarely the answer!

Coding proper error handling into one’s application could quite easily form the basis of a whole series of articles. Suffice it to say, you should be looking at your code with a critical eye, and looking out for common “gotchas” as described in tip number four above — these will save a lot of time come testing and debugging:

If you’re not routinely checking for Object Variable Not Set and the like, you should be. One way you could do this is as follows:

On Error Goto lblErrs

Dim session As New NotesSession
Dim db As NotesDatabase
Dim vwLU As NotesView
Dim doc As NotesDocument

Const ERR_NO_VIEW = "A required view could not be found."

Set db = session.CurrentDatabase
Set vwLU = db.GetView("MyLookup")

If vwLU Is Nothing Then
    Error 1000, ERR_NO_VIEW
End If

Exit Sub

lblErrs:
' // Custom error handler
errHandler Err, Error$, GetThreadInfo(XXX)
Exit Sub

Now, what are we doing here? First of all, the whole If… Then Error… bit is about “throwing” your own error. I prefer this approach to others in most scenarios because it’s a little “cleaner” than handling the error (null object or whatever) within the code itself: I like to parcel everything off to the appropriate place, in this case a custom error handler.

In the code example above, we’re sending three pieces of information to this error handler. Err is the error code thrown by Notes: either our custom one above (“1000”) or a familiar internal one (e.g. “13” for “Type Mismatch”). Error$ is the error message generated — again, either our one (via the ERR_NO_VIEW constant) or Notes’ own. Thirdly, I’m using something that doesn’t seem to get used much: GetThreadInfo, which can provide some handy detail for tracing errors. The XXX element is nonsense in the example above, you should replace this with the appropriate constant to get the piece(s) of information you require:

Integer constant        Meaning
LSI_THREAD_LINE         Current line number
LSI_THREAD_PROC         Name of current procedure
LSI_THREAD_MODULE       Name of current module
LSI_THREAD_VERSION      LotusScript version

(Other constants are available: these are the main ones I use. Note also that to use these constants you need to include lsconst.lss in your script).

Like I say, error handling is a whole topic in itself. There are some excellent articles out there that you may wish to delve into.
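The errHandler routine called above isn't spelled out in this article, but a minimal version might look something like this (the signature is an assumption chosen to match the earlier call; a real one would probably log as well as display):

Sub errHandler(lngCode As Long, strMsg As String, strProc As String)
    ' Report the error with enough context to trace it later
    Messagebox "Error " & Cstr(lngCode) & ": " & strMsg & _
        " (in " & strProc & ")", 16, "Error"
End Sub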

6. Object oriented code

Clearly this only applies to any Java and LotusScript you may have in your application, but it bears detailing, especially when considered with my earlier tip regarding test-driven development.

Nowadays I often find myself placing script in custom classes rather than having lots of subs and functions everywhere. This isn’t suited to everything of course, but by way of example, the custom validation code touched-upon above was implemented as a class in LotusScript. The class, with its “constructor” (Sub New in LotusScript) performed the core validation, whilst additional public sub-routines and functions allowed the checking of specific fields with regards length, invalid characters, and so on. Private routines and functions are also used within the class to perform “housekeeping” — tracking the number of errors hit, collating error messages, and so forth.

This approach has a couple of benefits for the developer:

1. The main body of the code is all “tucked away” in the class, so agents and things like Querysave can be kept clutter-free.
2. Once the basic class has been tested, the developer can add new sub-routines without worrying too much about breaking anything. For example, my validation class tracked the errors arising and which field in the user interface to take the user to once validation had failed. So, when I added a new method to check for invalid characters, the basic error-handling and reporting to the user was already in place and tested. This principle extends to sub-classing rather than simply adding code to the base class — even better.
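By way of a sketch (all names here are invented, and the details simplified), the shape of such a validation class might be:

Class Validator
    Private m_doc As NotesDocument
    Private m_errors List As String
    Private m_errorCount As Integer

    Sub New(doc As NotesDocument)
        ' Constructor: hold the document and prepare for the checks
        Set m_doc = doc
        m_errorCount = 0
    End Sub

    ' Public method: flag a field as required
    Public Sub CheckRequired(strField As String, strLabel As String)
        If m_doc.GetItemValue(strField)(0) = "" Then
            m_errorCount = m_errorCount + 1
            m_errors(strField) = strLabel & " is required."
        End If
    End Sub

    Public Function HasErrors As Integer
        HasErrors = (m_errorCount > 0)
    End Function
End Class

New checks (length, invalid characters, and so on) become new public methods, while the error tracking stays private and already tested.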

Smashing! Now, when this approach should be used is up to you: it’s not always appropriate and may well be over-kill. In fact, I ought to touch on the down-side of custom classes in Notes and Domino:

1. Notes has a 64 Kb limit on code in a single script library event: bear this in mind when designing elaborate object models.
2. Domino Designer is not the greatest integrated design environment (IDE) for object oriented programming.

To summarise therefore, consider their use carefully, but classes can lead to some pretty quick code / test cycles, and that can’t be bad.

7. Consider testing within your actual application

What do I mean by this? Well, some people out there know I’m a Mac-head (I know, and a Notes developer. What can I say? I’m odd), and as a result I use a web browser called Safari. With a simple tweak, Safari is able to display a “Debug” menu thus:

As you can see, this menu lets the developer or tester do all kinds of things to the application, testing and tracking a variety of scenarios.

This led me to thinking — and the idea isn’t at all original, but there you go — what about implementing a similar thing in a Notes application? Here are some suggestions for this kind of thing:

Use a “Developer” access control list (ACL) role

This could be used to permit access to otherwise “hidden” features and options in outlines, form design, action menus, and so forth. In a complex workflow application you might want to reveal all the hidden fields in a form to developers for trouble-shooting.

Testing aids

Consider adding some agents, or similar code tweaks, to your databases which help you out. Typical applications include changing fields on the fly for trouble-shooting and re-setting other data for test purposes.

At the simplest level, changing fields comprises some @formula in an agent or toolbar icon which prompts the developer / tester with a list of fields in the currently-selected document, allowing them to then change any field, in terms of both data type and underlying value(s). You can find some example code by Chad Schelfhout in the LDD Sandbox. My suggestion is that you add this code to a smarticon, and use it forever more!

Other test aids could include scripts which generate large numbers of documents or fully-fledged code to benchmark more complex processes using things like the NotesTimer class.
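For example, a throw-away agent to generate test documents can be as simple as the following sketch (the form name, field, and document count are placeholders):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim doc As NotesDocument
Dim i As Integer

Set db = session.CurrentDatabase
' Create 500 dummy documents for view and agent benchmarking
For i = 1 To 500
    Set doc = db.CreateDocument
    doc.Form = "TestForm"
    doc.Subject = "Test document " & Cstr(i)
    Call doc.Save(True, False)
Next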

Indeed, it’s possible to develop a pretty sophisticated suite of test tools, as Nik Shenoy can attest. Nik has developed a beta “LSUnit” script library, taking the ideas behind tools such as the open source Java-based JUnit library and applying them to LotusScript.

The news isn’t all good though. Whilst Notes 6.x makes testing like this fairly painless — because you can compile all LotusScript from one menu command — releases prior to 6 don’t allow this. This is a pain when you make a change to a class tested in an agent, especially if that class is nested in a hierarchy of script libraries.


8. Logging

Logging is great. You may only wish to log complex scheduled agents in your final production system, but logging just about everything in your other agents can really help during the development phase. There are a few solutions out there, including LotusScript’s very own NotesLog class, and all are up to the job.

The best solution is one that allows error and event logging, with a choice of destination for that log: email, another database, text file, etc. Most solutions offer this, so it comes down to personal preference — and whether you require logging for both LotusScript and Java.
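At its simplest, the built-in NotesLog class covers the basics. A sketch (the log name and database filename here are examples):

Dim currentLog As New NotesLog("Validation agent")
' Log to a Notes database on the current server
Call currentLog.OpenNotesLog("", "agentlog.nsf")
Call currentLog.LogAction("Starting validation run")
' ... do the work, logging errors as they occur ...
Call currentLog.LogError(1000, "A required view could not be found.")
Call currentLog.Close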

Combining your error logging with the aforementioned GetThreadInfo function results in a powerful debugging solution: you’re effectively getting a “stack trace” which helps you pinpoint where the error’s occurring. In anything remotely involved, this is a God-send.

Must read Article on related subject:

Debugging LotusScript by Andre Guirard

LotusScript Error Tracing by Ferdy Christant

A few days back I was using Google Mail, and I observed that most of the commercial mail sites hide the "CC" and "BCC" fields in the memo form. I love the way Google has done it. So I felt like playing with my Notes mailbox, and I have done the same thing Google Mail does. See below:

Here are the steps to achieve the same :

1/ Add two custom fields to your "Memo" form, say "c_Cc" and "c_Bcc", and hide them permanently.
2/ Set the default value of both fields to "1".
3/ Create one row just below the "To" field and create two hotspots, "Add Cc" and "Add Bcc", as shown below:

4/ Put the following formula for "Add Cc" hotspot :

FIELD c_Cc := "0";

5/ Put the following formula for "Add Bcc" hotspot :

FIELD c_Bcc := "0";

6/ Add c_Cc="1" to the hide-when formula for the "EnterCopyTo" field and its whole table row.

7/ Add c_Bcc="1" to the hide-when formula for the "EnterBlindCopyTo" field and its whole table row.

8/ Add a row below the Subject field and create a hotspot labeled "Attach a File".

9/ Put the following formula in the hotspot:


That's it!

Agent profiling is a new feature in R7 which can help you analyse, troubleshoot, and monitor LotusScript code. Profiling an agent helps you more accurately target performance problems.

To profile an agent, the agent profiling has to be turned on for that particular agent. This setting is on the second tab of the Agent Properties box.

After the profiling toggle is turned on, the next time the agent runs it will be profiled. Agents can be profiled regardless of how they run (for example, as a scheduled agent, as a Web agent, or manually from the Action menu). The profiling information is stored in a Profile document in the database associated with the agent.

To view profiling information, select the agent you are profiling in Domino Designer, and then choose Agent - View Profile Results.

You see the name of the agent and the time stamp of when profiling was done. Elapsed time is the total amount of time the agent ran, followed by a total measured time, which is typically somewhat smaller because time values are rounded down for display purposes. For example, the values under one millisecond are displayed as zeros in the following table. The profiling table contains the class, the method, the operation, the total number of calls to that method, and the total amount of time spent on all calls to that method. The information in the table is sorted in descending order, showing the methods where the most amount of time was spent at the top.

Profiling results are presented in a document based on the hidden form $BEProfile. The result document contains a heading listing the name of the agent and the creation time. The Body item of the result document contains a table with a row for each Domino Objects method called and five columns:

  • Class -- The name of a Domino Objects class using normalized names such as Session, Database, and Document.
  • Method -- The name of a Domino Objects method or property using normalized names such as CurrentDatabase, AppendItemValue, and Save.
  • Operation -- For properties, the type of operation: Get or Set.
  • Calls -- The number of times the method or property was called.
  • Time -- The amount of time the calls consumed in milliseconds. The symbol "<" means not enough time to calculate.

One of the handy options Quickplace provides is viewing Office documents as inline attachments. This can be achieved by selecting the following options while creating a new page.

Quickplace opens Office documents as inline attachments, which makes it easy and time-saving for end users to access them.




But sometimes a user comes with a requirement: "I wish to view PDFs in the same way as Office documents." Here are the code and steps to achieve this.

  • Create a new form (Customize -> Forms -> Imported HTML form).

  • Create an HTML file with the following code

    (HTML code is not allowed here; drop me a mail for the HTML file)

  • Select the newly created HTML file, name it "Inline Pdf", and publish the page.

  • Create a new page with the "Inline Pdf" option.

    Here is the result

This article introduces pre-delivery mail agents that can process incoming mail before it reaches a user's mail box. These agents can file new messages in folders other than the Inbox, remove large attachments to conserve disk space, or even delete messages outright.

Notes R5 includes new pre-delivery mail agents that you can set up to run before new mail arrives. This means that you can do things like:

  • File newly delivered messages into a folder other than the Inbox when those messages meet certain conditions. For example, you may want to file bug reports into a Bugs folder. Because the agent runs when the message is delivered, the user does not see the message appear in the Inbox and then move to another folder.
  • Remove large attachments from messages before delivering those messages to the user's mail box. The agent removes the attachment before the entire message writes to the user's mail box; thus, you can conserve disk space and eliminate the need to replicate extra data.
  • Determine (based on something about the incoming message) that the message should not be delivered and delete it. For example, you may want to delete messages from a known "Spam" user. Again, you can save both your disk and replication resources.

This article will talk more about pre-delivery mail agents, and how you can use them to perform these operations. We will cover the agent's behavior, the restrictions imposed on this type of agent, and the situations where you should use the agents, versus when you should use the post-delivery mail agents. Then, we'll look at how to debug the agents, and go over some specific examples. You'll see how these agents can be useful for efficiency and storage capacity, as well as usability.

Note: Prior releases of Domino and Notes are not aware of pre-delivery mail agents and the new trigger type "Before new mail arrives." If an R5 server that has pre-delivery agents enabled fails over to a pre-R5 server, the router will deliver the mail without executing the pre-delivery agent. If you create a pre-delivery agent on a Notes R5 client and then attempt to edit or turn it off from a pre-R5 client, you will receive an error that the agent has an unknown trigger.

An introduction to pre-delivery mail agents

Mail agents in R5 now come in two flavors according to when they run: before new mail arrives or after new mail arrives. The new pre-delivery mail agents are identified by the new agent trigger "Before new mail arrives." The old-style R4.x mail agents are still supported in R5, and are now identified by the agent trigger "After new mail arrives." We will refer to these old-style agents as post-delivery mail agents. The following screen shows the new mail agent triggers:

Figure 1. New agent triggers

The pre-delivery mail agents process mail before it arrives in the user's mail database; for example, to move incoming mail to a folder. The agents are run by the mail router, so they are guaranteed to run before a new mail message deposits into the user's mail file. In contrast, post-delivery agents run after a mail message arrives in the user's mail database. So, you can use post-delivery agents for operations that do not depend on the timing of mail delivery; for example, to respond to mail messages.

Although you can use both types of mail agents in the same mail database, you can enable only one pre-delivery mail agent at a time. You can have an unlimited number of disabled pre-delivery mail agents. In addition, you can have an unlimited number of enabled and disabled post-delivery agents. The following screen shows the agent view of a mail database containing both pre-delivery and post-delivery mail agents. Notice that the blue arrow below the checkbox identifies the currently active (enabled) pre-delivery agent:

Figure 2. Agent list with enabled arrow

The order of execution

If you enable both pre-delivery and post-delivery mail agents in a database, the pre-delivery agent always executes first (before the mail message is deposited into the database). Then, the post-delivery agents execute (after the mail message has been deposited into the database). The post-delivery agents are triggered by the new message, even if the pre-delivery agent filed it into a folder other than the Inbox. If the pre-delivery mail agent deletes a mail message, the message is never delivered to the database. This means that the post-delivery agents are not invoked for this particular message.

In addition to mail agents, R5 provides two other mail filtering options: new router controls and mail template rules. You can use the built-in router controls to allow mail only from designated domains, allow mail from designated organizations, deny mail from designated organizations, and to impose size restrictions on incoming mail. With the mail template rules, you can specify to watch for messages from a certain sender (such as your boss), or messages that contain a certain subject, and then select what to do when those messages arrive (copy or move them to a folder, delete them, or change the importance of the message).

So, if you use all of the mail filtering options in R5, the options execute in the following order:

  1. The router controls verify that a message is allowed to route through the domain.
  2. The pre-delivery mail agent processes the incoming message.
  3. The action specified in the mail template rules occurs.
  4. The post-delivery mail agents process the message.

Configuration settings

Because pre-delivery mail agents are run by the router, Agent Manager settings have no effect on these agents. Therefore, these agents have separate configuration settings. First, the pre-delivery mail agents have a separate maximum execution time, specified in the "Pre-delivery agent timeout" field in the "Router/SMTP" /Restrictions and Controls/Delivery Controls tab of a server's Configuration Settings document (shown below). By default, this setting is 30 seconds. (LotusScript and Java agents run by all other triggers use the execution time specified in the Server Tasks/Agent Manager tab of the Server document.)

Figure 3. Configuration Settings document

If the pre-delivery agent exceeds the maximum timeout, it aborts in the middle of its execution. The terminate event fires, and thus gives you, the agent writer, a chance to do some minimum amount of work to clean up. If you write the agent in LotusScript, any files that the agent opened close automatically. If you write the agent in Java, the JVM (Java Virtual Machine) schedules all objects used in the agent for garbage collection. Although it may not happen immediately, the JVM closes any open files when the corresponding objects are collected.
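As a sketch of such cleanup in LotusScript, an agent can pair its Initialize sub with a Terminate sub that releases any file it opened; the file path here is purely illustrative:

```lotusscript
REM In the agent's (Declarations) section:
Dim fileNum As Integer

Sub Initialize
fileNum = Freefile
REM Illustrative work file; a real agent would use its own path
Open "c:\tmp\work.txt" For Output As #fileNum
REM ... processing that may exceed the timeout ...
Close #fileNum
End Sub

Sub Terminate
REM Runs even when the agent is aborted for exceeding the
REM timeout, so we can release the file handle here
On Error Resume Next
Close #fileNum
End Sub
```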

In addition, you can control the number of concurrent pre-delivery mail agents that can execute by configuring the maximum delivery threads for the router, specified in the "Router/SMTP" /Restrictions and Controls/Delivery Controls tab of a server's Configuration Settings document. The number of delivery threads can vary from 3 to 25. By default, Domino determines the number of delivery threads based on the size and performance characteristics of the server.

Designing with pre-delivery mail agents

When designing your mail processing application, you can put both pre-delivery and post-delivery mail agents to work for you. If possible, try to separate your processing into whatever is "critical" to do before a message reaches the user's mail box, and whatever can occur after the message reaches the user's mail box. For example, you might want to have a pre-delivery agent strip off large attachments, and set a flag when it performs the operation. Then, the post-delivery agent can notify the mail sender that the message contained an attachment that was too large to deliver.

When you begin designing your pre-delivery mail agents, you'll need to keep the following things in mind:

  1. Pre-delivery agents have built-in restrictions to ensure their efficiency
  2. There are some key coding differences for pre-delivery and post-delivery mail agents
  3. Pre-delivery agents can run automatically when servers failover
  4. Folder operations work differently for pre-delivery agents
  5. Debugging pre-delivery mail agents is different than other agents

The next sections describe each of these areas in more detail.

Built-in restrictions for pre-delivery mail agents

Because the router executes the pre-delivery mail agents, it's vital for these agents to be as short and efficient as possible. Otherwise, they may slow down the router, which would be highly undesirable for server performance. So, to help ensure the efficiency of the pre-delivery agents, we placed the following restrictions on these types of agents:

  • As mentioned earlier, you can enable only one pre-delivery agent per database. This allows Domino to retrieve an agent in an efficient manner for any number of users.
  • You cannot set up pre-delivery mail agents to call other agents. This facilitates an efficient caching scheme.
  • You cannot set up pre-delivery mail agents to modify attachments. The agents can only examine attachments and detach them. Currently, there are no methods that can modify attachments; you can only modify them by using an OLE operation. Therefore, this restriction means that the methods Activate and DoVerb on NotesEmbeddedObject are not allowed.
  • As mentioned earlier, the maximum execution time for pre-delivery agents is separate from other agents, and we recommend you keep the setting at a much smaller number than for other agents. By default, the setting is 30 seconds.

Coding differences for pre-delivery and post-delivery mail agents

Remember that pre-delivery agents execute before the new mail message is written to the user's mail box. This means that as an agent writer, you will not find this new document in any collection obtained from the user's mail database. The new document is available only through the document context. In general, you will need to slightly modify the logic that you used in R4.x mail agents to work with the new mail trigger (that is, you will need to change how you obtain the document that is being delivered).

Another difference between post-delivery (R4.x) mail agents and the new pre-delivery agents is that with the new agent, you know that it will operate on only one document -- the new mail message that the router is delivering. In contrast, a post-delivery mail agent can operate on any number of documents, because the agent runs on all new documents that arrived in the user's mail database since the last time it ran.
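The difference in how each agent type obtains its document(s) can be sketched as two fragments (item names are illustrative):

```lotusscript
REM Pre-delivery agent: the in-transit message is available
REM only through the document context
Dim session As New NotesSession
Dim doc As NotesDocument
Set doc = session.DocumentContext
REM ... process the single message being delivered ...

REM Post-delivery (R4.x-style) agent: process every message
REM delivered since the agent last ran
Dim db As NotesDatabase
Dim docs As NotesDocumentCollection
Set db = session.CurrentDatabase
Set docs = db.UnprocessedDocuments
Set doc = docs.GetFirstDocument
Do While Not (doc Is Nothing)
REM ... process each new message ...
Set doc = docs.GetNextDocument(doc)
Loop
```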

Agent failover for pre-delivery mail agents

The router executes pre-delivery mail agents on the same server where it delivers mail. If mail delivery fails over to another server in a cluster, the agent execution fails over as well (provided the second server is also an R5 server). That is, the router on the second server automatically runs the pre-delivery mail agent on any mail it delivers.

In contrast, post-delivery agents are designed by default to run on the home mail server of the agent signer (the person who last modified the agent). Before the agents run, they perform a check to see if the current server is the home mail server of the agent signer. The home mail server is determined by taking the name of the agent signer, performing a lookup in the Domino Directory, and retrieving the mail server from the user's Person document. If the home mail server is not the same as the server on which the agent is attempting to run, the agent does not run. This means that the post-delivery agent does not run unless it resides on the user's mail server. Mail-in databases also should reside on the same server as the mail server listed in the user's Person document. You can change this default behavior by setting the NOTES.INI AMgr_DisableMailLookup variable to 1. This setting suppresses the check for the user's home mail server and allows the agent to run on any server. Be aware that depending on the logic of your agent, allowing the agent to run on any server may cause replication conflicts. (This Agent Manager setting has no effect for pre-delivery mail agents, because the router runs those agents.)

Managing folders in pre-delivery mail agents

Pre-delivery mail agents operate on a message while it's in transit -- that is, before the router delivers the message to the user's mail box. This results in the following special behavior in folder manipulation operations:

  • When the router delivers the message to the user's mail database, only one (the first) PutInFolder operation takes effect. If your agent uses LotusScript or Java, the following runtime error appears if more than one PutInFolder operation is detected: "Invalid sequence of operations in mail pre-delivery agent." If you are using a simple agent and have multiple move operations, the Agent Log generates a warning notifying you which folder operations have been ignored.
    Because only one PutInFolder operation can take effect, any newly delivered documents can appear in a maximum of two folders (Inbox and any other folder), but in any number of views at the same time. Note that views and folders are two different things. Membership in a view is determined by the selection formula for the view (All Documents selects all documents). For example, you can create a view called "All Mail From Julie" that selects all documents that have "Julie" in the From field. You can define any number of such views, and one document can match any number of these criteria so that it can appear in any number of views. Folders, on the other hand, are populated explicitly. By default, the router inserts a document into the Inbox folder on delivery. The agent can decide not to do that, and can also decide to put the document into another folder. So, when you use a pre-delivery mail agent, the document can appear in zero, one, or two folders, but in any number of views.

  • If the pre-delivery mail agent removes the message from the Inbox via the operation RemoveFromFolder("($InBox)"), the mail message is delivered, but only appears in the All Documents view.
  • If you want a new mail document to appear in only a different folder and not in the Inbox, the agent needs to perform two folder operations: PutInFolder("NewFolder") and RemoveFromFolder("($InBox)").
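These folder operations can be sketched as a fragment of a pre-delivery agent (the folder name NewFolder is illustrative):

```lotusscript
Dim session As New NotesSession
Dim note As NotesDocument
Set note = session.DocumentContext

REM Deliver the message into NewFolder only, not the Inbox
Call note.PutInFolder("NewFolder")
Call note.RemoveFromFolder("($InBox)")

REM Omitting the RemoveFromFolder call would leave the message in
REM both the Inbox and NewFolder; calling only RemoveFromFolder
REM would leave it visible in the All Documents view only
```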

The following table summarizes the sequences and results of folder methods.

Figure 4. Folder operations table

Debugging pre-delivery mail agents

Because the pre-delivery mail agent operates on the message being delivered, you cannot test or run the logic of this agent effectively as a manual agent. If you do, you will receive the error "No documents have been selected" since the context is not set properly.

However, you have the following options for debugging pre-delivery mail agents:

  • You can change the agent trigger to another type, such as "Selected documents," and then verify that the agent logic works correctly when the agent context is the document selected in a view. When you are satisfied with the logic, switch to the "Before mail arrives" trigger.
  • You can debug the logic of the agent by writing debug information to the Agent Log.
  • You can use MessageBox or Print methods to print debug information to the server console and server log.
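The last two options can be sketched in LotusScript as follows (the log name and messages are illustrative):

```lotusscript
Dim dbug As New NotesLog("Router log")
dbug.LogActions = True

REM Write a debug entry to the Agent Log
Call dbug.LogAction("Reached attachment check")

REM Print and Messagebox output goes to the server console
REM and server log when the router runs the agent
Print "Pre-delivery agent: processing message"
Messagebox "Message generated by mail pre-delivery agent."
```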

Since the router executes pre-delivery mail agents, the Agent Manager NOTES.INI settings for logging and debugging have no effect. To control the errors that log to the server console for pre-delivery agents, you can specify the "Logging level" setting for the router in the "Router/SMTP" /Advanced/Controls tab of a server's Configuration Settings document. You can set the Logging level to Minimal, Normal (the default), Informational, or Verbose. If you set this variable to Verbose, your agent errors log to the console. Note that you will also get a very verbose output of the router operations. By using this separate setting to control the output of pre-delivery mail agents, you can tune the performance of the router, while still having your other agents generate as much output as needed for your other operations.

You will notice that the output from the pre-delivery agent on the server console is prefixed by "Addin:", as shown in the following:

"Addin: Agent message box: Message generated by mail pre-delivery agent."
This happens because the messages are generated by the router task (which is executing Agent Manager routines via API calls) and the router task is a server add-in. The same prefix appears on the error messages generated by the agent to the server console.

Examples of pre-delivery mail agents

Now let's put everything we learned about the pre-delivery mail agents into action. We will go over the following examples of some typical operations you can perform with the pre-delivery agents:

  • Filing a message into a folder other than Inbox based on the subject of the message
  • Detaching attachments if they are larger than our threshold
  • Deleting a message and forwarding a copy to another person
  • Splitting a task into critical and non-critical portions, and using both types of mail agents to perform the task

Please note that the sample agents include a lot of debugging code for illustration purposes. To write these agents for a production system, you should remove all debug code (including the writing to the NotesLog, print, and MessageBox statements) after you complete your testing to make the agents as small and as efficient as possible.

Example One: Filing a message

This LotusScript agent files a message into a folder other than Inbox if the subject of the message is "Vacation request."

Sub Initialize
Dim session As New NotesSession
Dim note As NotesDocument
Dim dbug As NotesLog
Dim db As NotesDatabase
Dim it As NotesItem
REM Log steps in our processing for debug purposes
Set dbug = New NotesLog("Router log")
dbug.LogActions = True
Set db = session.CurrentDatabase
REM Make sure we have the note set correctly
If db Is Nothing Then dbug.LogAction("db is not set") Else dbug.LogAction("db is set")
Set note = session.DocumentContext
If note Is Nothing Then dbug.LogAction("note is not set") Else dbug.LogAction("note is set")
REM Note the Subject of all messages
dbug.LogAction("Subject ->" + note.Subject(0))
REM If this message has the special subject, store it in the special folder
If note.Subject(0) = "Vacation request" Then
Call note.PutInFolder( "Vacation" )
REM PutInFolder leaves a message in the Inbox view as well.
REM Since we want it only in the Vacation folder, we need to remove it from the Inbox
Call note.RemoveFromFolder("($InBox)")
dbug.LogAction("File into Vacation Folder")
End If
End Sub

Example Two: Detaching attachments

This LotusScript agent detaches attachments that are bigger than a certain size (MaxSize).

Sub Initialize
Dim session As New NotesSession
Dim db As NotesDatabase
Dim doc As NotesDocument
Dim dbug As NotesLog
Dim rtitem As Variant
Dim fileCount As Integer
Dim it As NotesItem
REM Specify the size limit for attachments
Const MaxSize = 5000
fileCount = 0
Set dbug = New NotesLog("Router log")
dbug.LogActions = True
REM get the incoming mail message
Set doc = session.documentcontext
REM Log the subject name of the message for debug purposes
Set it = doc.GetFirstItem("Subject")
dbug.LogAction("doc subject from context" + "-> " + it.Text)
Set rtitem = doc.GetFirstItem( "Body" )
If ( rtitem.Type = RICHTEXT ) Then
Forall o In rtitem.EmbeddedObjects
REM Note how many files we have processed
fileCount = fileCount + 1
dbug.LogAction("file count:"+Cstr(fileCount))
If ( o.Type = EMBED_ATTACHMENT ) And ( o.FileSize > MaxSize ) Then
Call o.ExtractFile( "c:\tmp\newfile" & Cstr( fileCount ) )
Call o.Remove
REM Note that we removed an attachment
dbug.LogAction("attachment removed")
REM Create a field noting that we removed an attachment
doc.stripped = "yes"
Call doc.Save( True, True )
End If
End Forall
End If
REM Finish up agent log processing
dbug.LogAction("Mail preprocessing agent is done")
End Sub

Example Three: Deleting messages

This LotusScript agent deletes messages that come from "Joe Spam" after forwarding them to the postmaster.

Sub Initialize
Dim session As New NotesSession
Dim note As NotesDocument
Dim dbug As NotesLog
Dim db As NotesDatabase
Dim it As NotesItem
Set dbug = New NotesLog("Router log")
dbug.LogActions = True
Set db = session.CurrentDatabase
Set note = session.DocumentContext
dbug.LogAction("From ->" + note.From(0))
If note.From(0) = "CN=Joe Spam/O=SpamFactory" Then
Call note.Send(False, "Administrator/Lily")
dbug.LogAction("Send memo")
Call note.Remove(True)
End If
End Sub

Example Four: Using both pre-delivery and post-delivery agents

This example shows how to perform only critical mail processing in the pre-delivery agent, and all other processing in the post-delivery agent. Our overall goal is to strip any large attachments from incoming mail and to notify the mail senders that their attachments weren't delivered due to their size. The pre-delivery agent strips off large attachments, since this reduces the overhead and disk space. The post-delivery agent notifies the mail sender, since this part of the task is not time critical.

For the first part of the task, we will use the pre-delivery agent created in Example Two. Notice in that agent that when we remove an attachment, we set the status of a field "stripped" to "yes." In the post-delivery agent, we will check that field and if it was set, we will generate a reply to the sender.

Sub Initialize
Dim session As New NotesSession
Dim db As NotesDatabase
Dim docs As NotesDocumentCollection
Dim memo As NotesDocument
Dim reply As NotesDocument
Dim n As Integer
Set db = session.CurrentDatabase
Set docs = db.UnprocessedDocuments
REM if we have new mail, start processing
If docs.Count > 0 Then
For n = 1 To docs.Count
Set memo = docs.GetNthDocument(n)
If Not memo.SentByAgent Then
REM if attachments were stripped, send a reply
If memo.stripped(0) = "yes" Then
Set reply = memo.CreateReplyMessage(False)
reply.Subject = "Re: " & memo.Subject(0)
reply.Body = "The message you mailed contained attachments that were too large. They were removed before mail delivery."
Call reply.Send(False)
End If
End If
Call session.UpdateProcessedDoc(memo)
Next
End If
End Sub

Some important points to note before using a "Before new mail arrives" triggered agent in applications (mostly in mail-in database apps or customized mail boxes). These agents differ from normal scheduled or event-triggered agents in the following ways:

1. Only one such agent is allowed per database.

2. This agent cannot call another agent (cascading is not possible).

3. The maximum execution time of this agent is NOT controlled by the agent execution time parameter in the Server document.

4. The execution time is governed by the router settings, because these agents are executed by the router, not by the Agent Manager.

5. The default maximum execution time is 30 seconds. The parameter can be changed (the pre-delivery agent timeout parameter in the Router/SMTP settings), but this is not advisable, as it will adversely impact router performance.

No Lotus Notes/Domino developer wants to hear the following comment: "Beautiful application, too bad it's so slow!" In this two-part article series, we explain how you can avoid this embarrassment by building Notes/Domino applications optimized for performance.

One of the saddest sights we know is a beautiful application that is so slow it's unusable -- all the long hours and hard work wasted because users are frustrated by slow response times. Over the past 12 years, we've spent a lot of time researching and testing Domino applications and functionality to understand how and where features can best be used to optimize performance. We started supporting and developing Domino applications in the early 1990s, and quickly became fascinated with application performance. It seemed to us then (as it still does today) that much of what we perceive as server performance problems are actually application performance problems. And the solutions, therefore, are often found within the application rather than on the server.

In this two-part article series, we will share some of what we've learned with you. This series covers three areas of application performance: database properties, document collections, and views. In part 1, we will discuss database properties and document collections. In each case, we will point out areas that are most significant and provide concise, real-world examples to help you understand what to do in your own applications. We'll use examples from many applications; you'll probably find that at least one of them closely matches something that you do or that you use. Our goal is to help you build applications that are as fast as they are beautiful.

This article assumes that you're an experienced Notes/Domino application developer.

Database properties

There are a handful of database properties that are relevant to the performance of your application.

Don't maintain unread marks

If you check this box, unread marks will not be tracked in your application regardless of the settings you have for each view. We used client_clock to track the time spent opening a database, and what we saw surprised us. For a large application (say 20 GB with 200,000 documents), our Notes client could open the database in about five seconds without unread marks, including network traffic. With unread marks turned on, we had to wait an additional six seconds or more. This additional time was spent in GET_UNREAD_NOTE_TABLE and RCV_UNREAD. With unread marks turned off, these calls aren't made.

In a smaller database (less than 1 GB), we saw savings of maybe 0.5 seconds with unread marks turned off. Of course, it was faster to open that database with or without unread marks compared to the larger database. So you should consider whether or not your application needs the unread marks feature before you roll it out into production.

Optimize document table map

This feature has not changed for the past few releases of Lotus Notes/Domino. This feature is designed to speed up view indexing for applications with structures that resemble the Domino Directory. (In other words, they contain many documents that use one form and a small number of documents using a different form. Think of Person documents versus Server documents in the Domino Directory.)

The idea is that, instead of checking every document note to see whether or not it should be included in a view index, we make two passes. The first pass merely checks to see if the correct form name is associated with that document. The second pass, if needed, checks for the various other conditions that must be met to include this document note in the view index.

Note: Currently, this feature does not appear to improve indexing times, not even for Domino Directories.

Don't overwrite free space

This feature has not changed for the past few releases of Lotus Notes/Domino. If you uncheck this box, then whenever documents are deleted, Lotus Notes will actually overwrite the bits of data instead of merely deleting the pointer to that data. The goal is to make the data unrecoverable. You would only use this feature if you feared for the physical safety of your hard disk. For virtually every application, this extra physical security is unwarranted and is merely an extra step when data is deleted.

Maintain LastAccessed property

This feature has not changed for the past few releases of Lotus Notes/Domino. If you check this box, Notes will track the last time a Notes client opened each document in the database. Lotus Notes always tracks the last save, of course, in the $UpdatedBy field, but this feature tracks the last read as well. (It does not track Web browser reads, however.)

We have not seen this feature used by developers other than in knowledge base applications, where data is archived if it has not been read within a certain number of months or years.

Document collections

We've looked at customer code for many years in agents, in views, in form field formulas, and so on. In our experience, frontend performance problems tend to be more troublesome than backend performance problems for a number of reasons:

1. Backend processes are typically monitored more rigorously.
2. Backend processes frequently do not have to worry about network traffic.
3. Frontend problems can be confusing to decipher. Users often are not sure which actions are relevant, causing them to report unimportant and even unrelated actions to your Support desk.

But regardless of where the code comes from, if we find that something is slow and we open up the code to examine it, we will likely find the following as a common denominator:

1. The code establishes certain criteria from context, such as the user's name, the status of the document that the user is in, today's date, and so on.
2. The code gets a collection of documents from this or another database.
3. The code reads from, and/or writes to, these documents.

From performing tests over many years, we have found that typically, the first step is very fast and not worth optimizing, at least not until bigger battles have been fought and won. The third step is often slow, but unfortunately it is not very elastic. That is, you are unlikely to find that your code is inefficiently reading information from or saving information to a set of documents. For example, if you are trying to save today's date to a field called DateToday, you would likely use one of the following methods:

Extended class

Set Doc = dc.GetFirstDocument
Do While Not (Doc Is Nothing)
Doc.DateToday = Today
Call Doc.Save(True, False)
Set Doc = dc.GetNextDocument(Doc)
Loop

ReplaceItemValue

Set Doc = dc.GetFirstDocument
Do While Not (Doc Is Nothing)
Call Doc.ReplaceItemValue("DateToday", Today)
Call Doc.Save(True, False)
Set Doc = dc.GetNextDocument(Doc)
Loop

StampAll

Call dc.StampAll("DateToday", Today)
In our testing, we've never found a difference in performance between the first two of the three preceding examples. Using the extended class syntax, doc.DateToday = Today, appears to be just as fast as using doc.ReplaceItemValue ("DateToday", Today). In theory, we should see some performance difference because in one case, we are not explicitly telling Lotus Notes that we will update a field item, so Lotus Notes should spend a bit longer figuring out that DateToday is, in fact, a field. However, practical tests show no difference.

The dc.StampAll method is faster if you are updating many documents with a single value as in the preceding example. There were some point releases in which a bug made this method much slower, so if you're not using the latest and greatest, please confirm this is working optimally (either with testing or by checking the fix list). But as of Lotus Notes/Domino 6.5 and 7, this is once again fast. However, there are often so many checks to perform against the data or variable data to write to the documents that dc.StampAll is not always a viable option. We would put it into the category of a valuable piece of information that you may or may not be able to use in a particular application.

As for deciding which of the three steps we should focus on, our experience says that the second step (getting a collection of documents) is the one. It turns out that this is often, by far, the largest chunk of time used by the code and, fortunately, the most compressible. This was the focus of our testing and will be discussed in the remainder of this section.


Our testing methodology was to create a large database with documents of fairly consistent size (roughly 2K) and with the same number of fields (approximately 200). We made sure that the documents had some carefully programmed differences, so that we could perform lookups against any number of documents. Specifically, we made sure that we could do a lookup against 1, 2, 3, … 9, 10 documents; and also 20, 30, 40, … 90, 100; and also 200, 300, 400, … 900, 1000; and so on. This gave us a tremendous number of data points and allowed us to verify that we were not seeing good performance across only a narrow band. For example, a method might be an excellent performer against a large subset of documents in a database, but a poor performer against a small subset. Without carefully testing against the entire spectrum, we might have been misled as to its performance characteristics.

We ran tests for many hours at a time, writing out the results to text files which we would then import into spreadsheets and presentation programs for the purpose of charting XY plots. After many such iterations and after trying databases that were small (10K documents) and large (4 million documents), we came up with a set of guidelines that we think are helpful to the application developer.

Which methods are the fastest?

The fastest way to get a collection of documents for reading or writing is to use either db.ftsearch or view.GetAllDocumentsByKey. It turns out that other methods (see the following list) may be close for some sets of documents (discussed later in this article), but nothing else can match these methods for both small and large collections. We list the methods with a brief explanation here and go into more detail later.

1- view.GetAllDocumentsByKey gets a collection of documents based upon a key in a view, then iterates through that collection using set doc = dc.GetNextDocument (doc).
2- db.ftsearch gets a collection of documents based upon full-text search criteria in a database, then iterates through that collection using set doc = dc.GetNextDocument (doc).
3- view.ftsearch gets a collection of documents based upon full-text search criteria, but constrains the results to documents that already appear in a view. It then iterates through the collection using set doc = dc.GetNextDocument (doc).
4- db.Search gets a collection of documents based upon a non-full-text search of documents in a database, then iterates through the collection using set doc = dc.GetNextDocument (doc).
5- view.GetAllEntriesByKey gets a collection of view entries in a view, then either reads directly from column values or gets a handle to the backend document through the view entry. It then iterates through the collection using set entry = nvc.GetNextEntry (entry).
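The iteration pattern shared by the first four methods can be sketched as follows. This is a minimal example of method 1; the view name "LookupView" and key "Acme" are hypothetical, and the other methods differ only in how the collection is obtained:

```lotusscript
' Sketch of method 1: get a collection by view key, then walk it
' with GetFirstDocument / GetNextDocument.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim view As NotesView
Dim dc As NotesDocumentCollection
Dim doc As NotesDocument

Set db = session.CurrentDatabase
Set view = db.GetView("LookupView")
Set dc = view.GetAllDocumentsByKey("Acme", True)  ' True = exact match

Set doc = dc.GetFirstDocument
While Not (doc Is Nothing)
    ' ...read from or write to doc here...
    Set doc = dc.GetNextDocument(doc)
Wend
```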

If you have a small collection of documents (for example, 10 or so) and a small database (for instance, 10,000 documents), many different methods will yield approximately the same performance, and they'll all be very fast. This is what you would call the trivial case, and unless this code is looping many times (or is used frequently in your application), you might leave this code intact and move on to bigger problems.

However, you still may find small differences, and if you need to get many collections of documents, then even saving a fraction of a second each time your code runs will become meaningful. Additionally, if your application is large (or growing), you'll find the time differences can become substantial.

Here are two customer examples: first, scheduled agents that are set to run very frequently (every few minutes or whenever documents have been saved or modified) that iterate through every new document to get search criteria, and then perform searches based on those criteria. If 10 new documents were being processed, then 10 searches were performed -- and if 100 new documents were processed, then 100 searches were performed. For this customer, if we could shave 0.5 second off the time to get a collection of documents, that savings was multiplied by 10 or 100, and then multiplied again by the frequency of the execution of the agent. It could easily save many minutes per hour during busy times of the day, which is meaningful. Another case is a principal form whose PostOpen or QuerySave event runs this code. If you have hundreds of edits per hour (or more), this 0.5 second saved will be multiplied to become a noticeable savings.

Pros and cons of each method

When we explained to colleagues or customers why some of these methods are faster or easier to use than other methods, we often engaged in a spirited debate, complete with "on the one hand" and "on the other hand" arguments. To our great satisfaction, the deeper we've taken these arguments, the clearer the issues have become. We will attempt to invoke that same spirit in this article with two mythical debating opponents, Prometheus ("Pro" to his friends) and his skeptical colleague Connie (a.k.a "Con").

Prometheus: view.GetAllDocumentsByKey looks very fast. I think I'm sold on using it wherever I can.

Connie: All well and good, my friend, but what if you're looking up data in the Domino Directory? You can't get permission to create new views there easily.

Pro: Great point. OK, in applications where I control the lookup database, that's where I'll use this method.

Con: Oh? And if you end up creating an additional 10 views in that database, is it still a good method? Think of all the additional view indexing required.

Pro: That might appear to be a nuisance, but if I build streamlined views, they will likely index in less than 100 milliseconds every 15 minutes when the UPDATE task runs -- more frequently if lookups require it. Surely we can spare a few hundred milliseconds every few minutes?

Con: How do you streamline these views? Is that hard? Will it require much upkeep?

Pro: Not at all. To streamline a lookup view, you first make the selection criteria as refined as possible. This reduces the size of the view index and therefore, the time to update the index and perform your lookup. Then, think about how you'll do your lookups against this view. If you're going to get all documents, consider simply using a single sorted column with a formula of "1." Then it's trivial to get all the documents in the view. If you need many different fields of information, consider making a second column that concatenates those data points into a single lookup. One lookup is much faster than several, even if the total data returned is the same.
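Pro's "single sorted column with a formula of 1" trick can be sketched like this. The view name "QuickLookup" is hypothetical; it is assumed to have a tight selection formula, a first sorted column whose formula is simply 1, and a second column that concatenates the fields you need (for example, FullName + "~" + Dept + "~" + Phone):

```lotusscript
' Sketch: fetch every document in a streamlined lookup view with
' one key. Because the first sorted column evaluates to the number 1
' for every document, a single numeric key matches them all.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim view As NotesView
Dim dc As NotesDocumentCollection

Set db = session.CurrentDatabase
Set view = db.GetView("QuickLookup")

' One lookup returns the whole view; read the concatenated second
' column from each entry rather than doing several separate lookups.
Set dc = view.GetAllDocumentsByKey(1, True)
```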

Con: OK, I might be sold on that method. But you have also touted db.ftsearch as being very fast, and I'm not sure I'm ready to use that method. It seems like it requires a lot of infrastructure.

Pro: It is true that to use db.ftsearch reliably in your code, you'll need to both maintain a full-text index and also make sure that your Domino server's configuration includes FT_MAX_SEARCH_RESULTS=n, where n is a number larger than the largest collection size your code will need to return. Without it, you are limited to 5,000 documents.

Con: And what happens if the full-text index isn't updated fast enough?

Pro: In that case, your code can include db.UpdateFTIndex to update the index.

Con: My testing indicates that this can be quite time consuming, far outweighing any performance benefits you get from using db.ftsearch in the first place. And what happens if the full-text index hasn't even been created?

Pro: If the database has fewer than 5,000 documents, a temporary full-text index will be created on-the-fly for you.

Con: I have two problems with that. First, a temporary full-text index is very inefficient because it gets dumped after my code runs. Second, 5,000 documents isn't a very high threshold. Sounds like that would only be some mail files in my organization. What if there are more than 5,000 documents in the database?

Pro: In that case, using db.UpdateFTIndex (True) will create a permanent full-text index.

Con: OK, but creating a full-text index for a larger database can be very time consuming. I also know that the full-text index will only be created if the database is local to the code -- that is, on the same server as the code that is executing.

Pro: True enough. Fortunately, Lotus Notes/Domino 7 has some improved console logging as well as the ability to use Domino Domain Monitoring (DDM) to more closely track issues such as using ftsearch methods against databases with no full-text index. Here are a couple of messages you might see on your console log. As you can see, they are pretty clear:

Agent Manager: Full text operations on database 'xyz.nsf' which is not full text indexed. This is extremely inefficient.
mm/dd/yyyy 04:04:34 PM Full Text message: index of 10000 documents exceeds limit (5000), aborting: Maximum allowable documents exceeded for a temporary full text index

Con: While I'm at it, I see that you haven't said much positive about view.ftsearch, view.GetAllEntriesByKey, or db.Search. And I think I know why. The first two are fast under some conditions, but if the view happens to be structured so that your lookup data is indexed towards the bottom, they can be very slow. And db.Search tends to be very inefficient for small document collections.

Pro: All those points are true. However, db.Search is very effective at time/date sensitive searches, where you would not want to build a view with time/date formulas and where you might not want to have to maintain a full-text index to use the db.ftsearch method. Also, if you are performing lookups against databases not under your control and if those databases are not already full-text indexed, it is possible that db.Search is your only real option for getting a collection of documents.
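The full-text points made in the dialog can be combined into a defensive sketch: prefer db.FTSearch when an index exists, and fall back to db.Search otherwise. The query strings and field name "Status" are hypothetical:

```lotusscript
' Sketch: choose a collection method based on whether the database
' has a full-text index.
Dim session As New NotesSession
Dim db As NotesDatabase
Dim dc As NotesDocumentCollection

Set db = session.CurrentDatabase

If db.IsFTIndexed Then
    ' Second argument 0 = no maximum; actual results remain capped
    ' by FT_MAX_SEARCH_RESULTS on the server.
    Set dc = db.FTSearch("FIELD Status CONTAINS Open", 0)
Else
    ' Non-full-text search: needs no index, but is slow for small
    ' collections. Second argument is a cutoff date (Nothing = none).
    Set dc = db.Search("Status = ""Open""", Nothing, 0)
End If
```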

Here are some charts to help quantify the preceding points made by Pro. These charts show how long it takes to simply get a collection of documents. Nothing is read from these documents and nothing is written back to them. This is a test application in our test environment, so the absolute numbers should be taken with a grain of salt. However, the relationships between the various methods should be consistent with what you would find in your own environment.

In Figure 1, db.ftsearch and view.GetAllDocumentsByKey are virtually indistinguishable from each other, both being the best performers. Call that a tie for first place. A close third would be view.GetAllEntriesByKey, while view.ftsearch starts out performing very well, but then rapidly worsens as the number of documents hits 40 or so.

Figure 1. Document collections, optimized views (up to 100 documents)

In Figure 2, the only difference worth noting from Figure 1 is that db.Search looks better and better as the number of documents increases. It turns out that at approximately 5 to 10 percent of the documents in a database, db.Search will be just as fast as the front runners. As we saw in Figure 1, view.ftsearch is getting worse and worse as the document collection size increases.

Figure 2. Document collections, optimized views (100 to 1,000 documents)

In Figure 3, the views are no longer optimized to put the results towards the top. That is, if we are getting a collection of only a few documents, then in our test environment, we can try to skew the results by making sure those few documents are towards the top or bottom of the lookup view. In Figures 1 and 2, those documents tended to be towards the top of the view, but in Figure 3, those documents are at the bottom. For three of the methods, this is immaterial (db.Search, db.ftsearch, and view.GetAllDocumentsByKey). However, for view.ftsearch and view.GetAllEntriesByKey, this switch is catastrophic in terms of performance. The scale on Figures 2 and 3 had to be changed -- instead of the Y-axis going up to one second, it has to go up to six seconds!

Figure 3. Document collections, non-optimized views


Whenever feasible, use view.GetAllDocumentsByKey to get a collection of documents. In conjunction with this method, streamline your lookup views so that they are as small and efficient as possible. Part 2 of this article series has some tips for doing this.

If your lookups need to go against rich text fields, or if your database is already full-text indexed, db.ftsearch is an excellent performer and well worth considering. Be sure that your results will always be less than 5,000 documents or use the Notes.ini parameter FT_MAX_SEARCH_RESULTS=n (where n is the maximum number of documents that can be returned) to guarantee that you do not lose data integrity due to this limit.

Part 2: Optimizing database views

In "Lotus Notes/Domino 7 Application Performance, Part 1," we examined how you can improve the performance of Lotus Notes/Domino 7 applications through the efficient use of database properties and document collections. In part 2, we explain how you can build high-performing views. As in part 1, this article provides many code snippets that you can re-use and adapt to your own requirements.

Over many years of analyzing application performance issues, we found that views are frequently involved in both the problem and the solution. Often, view indexing is the issue. This article explains how this can happen and what you can do to troubleshoot and resolve this type of problem. But there is another kind of view performance problem that has been popping up more frequently over the past few years. This involves views that display reader access controlled documents. The performance problems seen in these views are often not indexing related, so we’ll take a little time to discuss these separately.

This article assumes that you're an experienced Notes/Domino application developer.

Understanding view indexing (the Update task)

The first thing you have to know before troubleshooting a performance problem that may involve view indexing is how the indexing process works. Indexing is typically done by the Update task, which runs every 15 minutes on the Domino server. Technically, it is possible to tune that interval, but it involves renaming files, so it is rarely done.

When the Update task runs, it looks for every database on the server with a modified date more recent than the last time the Update task ran. Then the task refreshes the views in those databases. Based on our experience, it is reasonable to assume that it takes approximately 100 milliseconds to refresh a normal view in a production database in a production environment.

The logical question to ask is, "What flags a view as needing to be updated?" Every time any of the following occurs a view requires updating:

1- Replication sends a document modification to a database
2- A user saves or deletes a document (and then exits the database)
3- The router delivers a document

The Update task is very liberal in how it determines if a view needs to be updated. For example, imagine that you’re in your mail file and that you change the BCC field of a memo and nothing else. No view displays the contents of that field, so in fact, no view needs to be refreshed. But nevertheless, views that contain this memo will at least be examined simply because the server is not sure whether or not the edits you made will force a change in those views.

There is a twist to this: Users can force an immediate high-priority request to have a view refreshed by opening the view or by being in a view when they make a document change. Let’s look at an example.

Suppose Alice opens the Contact Management database to view #1 at 9:02 AM. Suppose also that the Update task conveniently ran at 9:00:00 and will run again at 9:15:00, 9:30:00, and so on. Alice updates a couple of documents at 9:05 AM. She creates new documents, edits existing ones, or maybe deletes documents. In any case, the server immediately updates view #1 because she is in that view. It would be strange if you deleted a document and still saw it in the view, right? So that view is updated right away. But the other views are queued for updating at 9:15 AM.

Now imagine that Bob is also in our Contact Management database at 9:02 AM. He is working in view #1. At 9:05 AM, he sees the blue reload arrow in the upper left corner, telling him that the view has been updated. He can press the F9 key or click the reload arrow, and the view will quickly display the updated contents. He doesn’t have to wait for a refresh.

Further, suppose Cathy is also in the database at 9:02 AM, but working in view #2. If she does not do anything to force a refresh in the interim (which is possible, but unlikely), then she sees the blue reload arrow at 9:15 AM when her view is refreshed by the Update task. More likely, though, Cathy makes some data updates of her own, scrolls through the view enough to force an update, or switches views. Any of these actions would force an immediate update.

At 9:15 AM, all the views that these clients have not already forced to refresh are refreshed, and we start all over again.

Additional information on view indexing

There are two additional pieces of information that we find helpful. The first is the full-text indexer. Typically, full-text indexes that are marked to update immediately are updated within seconds (or minutes if the server is busy). Full-text indexes that are hourly are updated (triggered) by the Update task at the appropriate interval. Full-text indexes that are daily are updated (triggered) by the Updall task when Updall runs at night.

The second additional point is that the developer can set indexing options in the view design properties. Many people misunderstand these to be settings that affect how the Update or Updall tasks will run. This is not so. The Update and Updall tasks will completely ignore these settings, updating views based solely on the criteria described in the preceding example.

The indexing options affect what happens when a client opens a view. If the options are set to manual, for example, then the user gets a (possibly stale) version of the view very quickly. If the options are set to "Auto, at most every n hours" and if it has been more than the specified time interval, then the user will have to wait a moment while the view is updated, just as if it were a regular view with Automatic as the view indexing option. We will discuss how these indexing options can be used to help craft useful views later in this article.

Quick tips on troubleshooting view indexing

In a well-tuned environment, indexing should be relatively transparent; views and full-text indexes should be updated quickly. In an environment experiencing performance problems, you may see the following symptoms:

1- Long delays when opening a database, opening a view, switching views, scrolling through a view, or saving a document
2- Delays when opening a document that uses lookups (In fact, you may see the form pause at certain points as it waits for these lookups to compute.)
3- Performance problems throughout the working day, but excellent performance during the off-hours
4- Out-of-date full-text indexes

The following logging and debugging Notes.ini parameters can help you address these issues:

log_update=1 -- Writes extra information to the log.nsf Miscellaneous Events view. This consists of one line of information every time the Update task starts updating a database, and a second line when the Update task finishes with that database. Each line has a time/date stamp, so by subtracting the times, you can get an approximate time (rounded to the nearest second) required to update that database.
log_update=2 -- Writes more information to the log. In addition to the data generated by log_update=1, this setting adds one line of information for each view in each database. Thus, a database with 75 views would have 77 lines: one line to signal the start of the Update task against this database, 75 lines (one for each view), and then a final line to denote the end of indexing in this database.

debug_nif=1 -- By far the most verbose way to collect information on indexing, debug_nif will write to a text file (which you specify using debug_outfile=c:\temp, for example). This can easily generate gigabytes of data per hour, so it should be used sparingly, if at all. The value of this debug variable is that it gives you the millisecond breakdown of all indexing activity, not just the Update task running every 15 minutes.
client_clock=1 -- Used on your client machine, not the server, this will write verbose information to a text file (which you specify using debug_outfile=c:\temp, for example), breaking down every client action you perform. This can be used to determine, for instance, whether or not a delay that the client sees is caused by long wait times for indexing.

To troubleshoot view indexing problems, start by collecting some data from log.nsf by setting log_update=1 (if not already set) and allow the server to collect information on view updates for a day. Then review the log.nsf Miscellaneous Events view for that day and look for patterns. Some meaningful patterns that may emerge are:

1- Very long update times for a specific database. This may indicate that the database has too much data being updated, too many complex views, or time/date-sensitive views (if the server is release 5 or earlier). The logical next step is to examine the business use and design of that database and perhaps to use log_update=2 for a day to pinpoint which views are problematic in that database.
2- Very long update cycles. We’ve seen cycles that have lasted four to five hours, meaning that instead of passing through all modified databases every 15 minutes, the Update task may only do so two or three times every day. This may indicate general performance problems affecting all tasks, or it may indicate a very high user or data load on the server. The logical next step would be to assess the general state of the server and the business use on it.

Whenever you find long update times, it is helpful to note which other tasks are running concurrently and whether or not those tasks are behaving normally. It is also helpful to note if the slow indexing times are universal, only during business hours, or only during certain peak times, such as every two hours. It is rare that a single observation will give you all the information you need to solve your problems, but it usually puts you on the right path.

View performance testing

To test the performance of various features and ways of building a view, we created a database with 400,000 documents and ran a scheduled agent to update 4,000 documents every five minutes. We then built approximately 20 views, each with slightly different features, but each displaying all 400,000 documents, using five columns. We will list the details later, but the big picture looks like this:

1- The size of your view will correlate strongly with the time required to rebuild and refresh that view. If your view doubles in size, so too will the time to rebuild or refresh that view. So, for example, if you have a 100 MB database that you expect will double in size every six months, and if you find that indexing takes approximately 30 seconds now, then you should anticipate that indexing will be 60 seconds in six months, then 120 seconds six months later, and so on.
2- The biggest "performance killers" in terms of view indexing are categorized columns. Sort columns add a small amount of overhead as do some other features, such as the Generate unique keys option (more on this in the tips at the end of the article).

Figures 1 and 2 demonstrate both the relationship between size and refresh time and the significant overhead that categorized columns add to your view performance. Figure 1 plots view index size against response time.

Figure 1. View index size versus response times

And Figure 2 shows view size compared to refresh times.

Figure 2. View size versus refresh time

In both charts, the first and third categorized views are initially expanded, and the second and fourth views are initially collapsed. Collapsing your categorized views shows a small, but discernible savings in refresh time.

Reader Names field

Over the past few years, we have seen a dramatic increase in the number of critical situations related to Reader Names fields and views. Customers find that performance is unacceptable frequently throughout the day. They cannot understand why because the code reviews and pilot tests all went smoothly. But a few months after rollout to their entire department/division/company, they find that complaints about performance are dominating conversations with their employees, and something has to be done.


Our favorite examples are HR and Sales Force applications -- both of which typically have stringent Reader Names controls. The following table illustrates a hypothetical scenario for a company using a database of 400,000 documents.

Title/role | Number of documents user can see (out of 400,000) | Percent of database
Corporate HQ, CEO, CIO, Domino administrators, developers | 400,000 | 100 percent
District Manager | 4,000 | 1 percent
Manager | 400 | 0.1 percent
Employee | 40 | 0.01 percent

Upper management and the Domino administrators and developers typically experience pretty good performance, which makes the first reports of poor performance easy to ignore. Performance problems are typically exacerbated by weaker connectivity (WAN versus LAN), which may also make the early complaints seem invalid. But after some critical number of complaints have been registered, someone sits down at a workstation with an employee and sees just how long it takes to open the database or to save a document, and then the alarm bells start ringing. Figure 3 shows how long it took a user to open a view in the sample database with 400,000 documents. All views were already refreshed. No other activity was taking place on the test server, and only one user at a time was accessing the server.

Figure 3. Time required to open a view in the sample database

Users are denoted by the percentage of documents they can see. If you cannot see a bar, it means that it’s very close to zero.

Before moving on to an explanation of Reader Names fields and performance, we want to leave you with one thought about Figure 3: You can quickly see why users of the flat (sorted, not categorized) views would have such radically different impressions of the performance of this application if they were 0.01 percent users compared to 100 percent users.

Understanding Reader Names

When an application uses Reader Names, it means that some or all documents have Reader Names fields -- that is, fields that are Summary Read Access. Summary means that the data can be displayed in views. Technically, you can create Reader Names fields and prevent them from having the summary flag, but then Lotus Domino cannot properly ensure security of the documents at the view level. Read Access means just that -- who can see this document? Typically, individual names, group names, and [roles] populate these kinds of fields. We encourage the use of [roles] over groups whenever possible for reasons of maintenance and control, but in terms of performance, these are all equivalent.

When a user opens a view that contains these reader access controlled documents, the server has to determine whether or not this user is allowed to see each document. The server is very methodical, starting at the top and checking each document. The process is similar to the following:

1- Lotus Domino examines document #1, determines that the user can see it, and then displays it.
2- Lotus Domino examines document #2, determines that the user can see it, and then displays it.
3- Lotus Domino examines document #3, determines that the user cannot see it, and then hides the document from the user.

The process continues until Lotus Domino examines all documents in the view. Of course, the view has to be refreshed first, otherwise the server wouldn’t know which document was #1 or what its Reader Names values were. Let’s pause in this explanation to consider a classic example of poor performance. Imagine for a moment that our database is a Sales Tracking application, and it has a By Revenue view that is sorted in descending order by size of contract. Imagine that the view has five columns with the first two columns being sorted (for example, Revenue and SalesRep). There are no categories. In this case, when a user opens the view (a view that potentially displays 400,000 documents), this user forces a refresh first, if needed, and then the server starts with document #1 and checks to see whether or not that document can be displayed to the user, then it checks documents #2, #3, and so on.

You are presumably reading this article via a Web browser, and you’re perhaps familiar with the fact that browsers render portions of a page quickly (text) and other portions (such as graphics) more slowly. But a Domino server does not serve up views this way. It doesn’t send any portion of the view data to the user until it has what it considers a full screen of information. For practical purposes, this may be, for instance, 60 rows of data. You can test this yourself with a Notes client by opening a flat view (no categorized columns) and pressing the down arrow key. At first, the view scrolls quickly. Then there is a pause. This is when the server is dishing up another few KB of data for you. Then the view scrolls quickly again for another 60 or so rows. This repeats as long as you hold down the arrow key.

Back to our user: It turns out that he can see document #1, but perhaps he cannot see the next 10,000 documents. Why? Maybe he can see only 40 documents in the whole database. Because the Domino server is not going to show him anything until it can build about 60 rows of information (or until it gets all the way to the bottom of the view), it’s going to be a long wait.

This is exactly what users experience. They wait for minutes sometimes, while the server checks each document. In the case of a user who can see so few documents that they don’t fill the screen, the server goes through the entire view before sending the user the partial screen of data that he is allowed to see.

Some readers are aware of this functionality. Some are now screaming in pain. You can imagine that, as people who routinely analyze applications for performance problems, this is the kind of problem we spot quickly and look to resolve with workarounds.

One workaround is to make the view categorized. The reason this helps performance is that the categories will always be displayed, and they take up a row of data. So in our example of a view By Revenue, the user would quickly see dozens of categories displaying, say, Revenue and SalesRep. Clicking the twistie to open any category results in a quick round trip to the server to determine that the underlying document is not viewable for this user, but that’s a far cry from having to examine 400,000 documents in one go.

On the other hand, finding that virtually every twistie the user clicks expands, but doesn’t show any documents, may be frustrating. In our example, there are some other possible workarounds, but we would assert that a view showing all documents by revenue is probably not a view that users with access to only 0.01 percent of the database should have access to. It is a contradiction of their access. But there are many cases in which it makes business sense to have many users accessing a view that contains more data than they should see. At the bottom of this section, we list some tips for building fast views in databases with reader access controlled documents.

We have two closing thoughts about Reader Names and views. First, in a normal production environment, it’s rare to get such clear data as we have in Figure 3. You’re much more likely to see that the performance problems caused by Reader Names cause a host of other performance delays, such as agents, view indexing, and so on. These problems may cause other problems. For example, you may notice that the views are slow for everyone and deduce that the problem is view indexing -- when the underlying problem is that Reader Names are forcing long computations by the server when some users open views.

Second, our tests (and Figure 3) represent an absolutely best-case scenario. The reason is that we had only one user at a time. We could duplicate the volume of data and even indexing issues with our scheduled agents, but our tests could not show the spiraling performance problem that occurs when hundreds of users simultaneously try to access views with reader access controlled documents. If your application uses Reader Names, you absolutely have to pay attention to view performance if you want to avoid a crisis.

Performance enhancing tips with reader access controlled documents

The following are some tips for making applications/views that perform well even with reader access controlled documents:

1. Embedded view using Show Single Category. This is the winner, hands down. If your data is structured so that users can see all the documents in a category, then you can display just the contents of that category very quickly to the user. In some cases, it may make sense to let the user switch categories, in which case you have to consider whether or not he can see the contents of the other categories. But in most cases, the view would be something like My Sales and would show all the sales documents for the current user. The caveat for this kind of view is that the user interface for the Notes client is not quite as nice as the native view display. For Web browsers, it is just as good, and we have never seen a reason not to use this kind of view for Web browser applications. In fact, the performance is so good that it’s faster to open one of these with reader access controlled documents than to open a native view without reader access controlled documents!
2. Local replicas. For a sales tracking database, many companies use local replication to ensure that their sales reps can use the database when disconnected. This is a great solution in many ways, but reader access controlled documents can be tricky when their Reader Names values change, and they need to disappear from some local replicas and appear on others.
3. Shared, private on first use views. This is an elegant solution for Notes clients, but there are some drawbacks. First, it cannot be used for browsers. Second, the views need to be stored either locally, which can be problematic for some customers, or on the server, which can be a performance and maintenance problem of its own. Third, as the design of the application changes, it can be tricky to update the design of these (now) private views. And fourth, some customers have experienced performance problems, which may be related to having large numbers of these private views being created and used.
4. Categorized views. As seen in Figure 3, categorized views can be very fast to open with respect to Reader Names. They are bigger and slower to index, but typically they eliminate the Reader Names performance issue. The real caveat here is that users may find these views to be unfriendly, a label no one wants to have on their application.
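To make tip 1 concrete, here is a minimal sketch in formula language. The field name (SalesRep) is hypothetical; the key idea is that the view's first column is categorized on the owner of each document, and the embedded view element's "Show single category" formula evaluates to the current user's name, so only that one category is computed and displayed:

```
REM {First column of the view: categorized and sorted.};
REM {SalesRep is a hypothetical field holding the owner's canonical name.};
@Name([CN]; SalesRep)

REM {"Show single category" formula on the embedded view element:};
@Name([CN]; @UserName)
```

Both formulas abbreviate the name the same way ([CN]), so the single-category value always matches a category in the view.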

The final tip concerns something to avoid. The feature "Don’t show empty categories" could, in theory, be used very successfully with the preceding tip to make a categorized view that would only display the categories containing documents that a user can see. However, in practice, it will result in a view with performance characteristics akin to a flat view, so it is probably a feature to avoid if performance is important.

General view performance tips

Here are some tips about view performance, regardless of whether or not reader access controlled documents are present.

Time/date formulas

Using @Now or @Today in a column or selection formula forces a view rebuild every time the view is refreshed. Therefore, if you use this kind of formula, consider setting the view refresh option to Manual or "Auto, at most every n hours." That way, when users open the view, they will not have to wait for a view rebuild (or will rarely have to, in the latter case). The downside is that the view may be out-of-date because it opened without trying to refresh. Consider the contents of these views and how quickly they change to determine whether or not you can use these indexing options safely.
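For example (the form and field names here are hypothetical), a selection formula like the following forces a full rebuild on every refresh, because the result of @Today changes from day to day and so the index can never be reused as-is:

```
REM {Time/date selection formula: triggers a rebuild at every refresh.};
SELECT Form = "Task" & DueDate <= @Today
```

Pairing such a view with a Manual or "Auto, at most every n hours" refresh setting limits how often that rebuild cost is paid. Where the application allows it, another common workaround is to have a nightly scheduled agent stamp documents with a computed status field, so the view formula no longer needs @Today at all.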

Use click-sort judiciously

Click-sort is a brilliant feature, one that we use in most public views in our applications. But it’s worth checking your hidden lookup views to be sure that no columns have click-sort enabled, because enabling it increases the disk space and indexing requirements without adding any functionality for your users. On the same topic, consider allowing ascending or descending, but not both, when you do use this feature: you save disk space and indexing time without impairing functionality. We would argue that the single-arrow behavior is actually preferable, because it is an on/off toggle rather than a three-way toggle.

Generate unique keys in index

One of the lesser-known performance-enhancing features, the "Generate unique keys in index" option for lookup views can dramatically improve lookup times. When selected, the view discards all duplicate entries. For example, if you have 400,000 documents all displaying only CompanyName, and there are only 1,000 unique company names, then selecting this feature results in only the first document for each company being held in the view index. There is a slight overhead to this feature, but it is easily outweighed by the dramatic decrease in view size. One warning: if you are displaying the contents of a multi-value field, you need to use a code trick to avoid losing data.

For example, imagine a database has 100,000 documents, all containing a multi-value field called Industry. You want to look up a unique list of industries already selected so that the Industry drop-down list can display the values when users create new documents. If you use this unique keys feature in a view that has a single column displaying Industry, it is possible that a new document containing the values Automotive and Air would not display because another document containing Automotive was already being displayed in the view. Thus, your new value of Air would never be picked up by the lookup formula.

To avoid this problem, use a formula such as @Implode(Industry; "~") in the column formula, and then the drop-down field lookup formula may be @Explode(@DbColumn("Notes"; ""; "(LookupViewName)"; 1); "~"). Although you will have some minor duplication of data in your view, you will be guaranteed no data loss.
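Putting both halves of that trick together (the view and field names are illustrative, as above):

```
REM {Column 1 of the hidden lookup view "(LookupViewName)",};
REM {with "Generate unique keys in index" enabled:};
@Implode(Industry; "~")

REM {Choices formula for the Industry drop-down field. @Unique strips};
REM {the duplicates that survive because different imploded combinations};
REM {can still contain the same individual value.};
@Unique(@Explode(@DbColumn("Notes"; ""; "(LookupViewName)"; 1); "~"))
```

Because each unique key is now the full imploded combination rather than a single value, no combination is discarded, and the exploded list still contains every industry in use.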

Color profiles

If you use color profiles (as in the mail template), then any change to that color profile document necessitates a view rebuild for any views that reference that color profile.

Default view

Whenever your application is opened, your default view is opened. Always be sure that this is a view that opens quickly. An interesting example of this is your mail file. For many users, the view that opens by default is the Inbox (folder). If a user is not in the habit of cleaning out her Inbox and it has many thousands of documents, then her mail file will be slower to open than if she regularly cleaned it out and the folder held only dozens or hundreds of documents. Incidentally, this is why some companies have a policy of periodically deleting old documents from the Inbox: it forces users to move those documents into other folders, improving performance both for that user and for the server.


The conclusion we hope you draw from this article series is that application performance is something you can influence. With thoughtful application of common sense (and perhaps some of the information and tips in this article), you can keep the performance of your applications at a level that is more than acceptable for your users.

By: IBM
