Best Practices: What is the best backup plan for BO 4.0?

Hi Experts!
I have worked with BO since 2007, a lot with BO XI 3.1 and now with BI 4.0, and I always come back to the same question about backups: what is the best backup plan for BO 4.0?
I know many ways to do this, but as a consultant developer, backup is usually not my responsibility; still, I always have to advise my clients and users.
A short summary of the approaches I know on BI 4.0:
- Stop the services and back up the repository database and the FileStore folder (optionally including the Tomcat folder and other BO installation folders)
- Create a job in LCM and a schedule to export an LCMBIAR file
- Use the Upgrade Management Tool to generate a BIAR file from the command line
I found an interesting post by Raphael Branger, but his best option is to use 360View. I don't know that software, and clients usually want to use SAP solutions, so the preference is to use BO's own way to make backups.
Backup & Recovery in BO 4.0
Note: I agree with Raphael about the old Import Wizard. I don't know why the Upgrade Management Tool doesn't allow importing a BIAR file of the same version as the target. It is terrible.
So let me ask the big question: what is the best backup plan for BO 4.0?
I know this depends on the environment and on many variables, but let us consider a general environment with a standard installation.
Thanks everybody!

Thanks Mrinal and Ajay,
In my experience I always use the full backup: repository database backup + FileStore folder backup (I usually recommend including the BO installation folder and the Tomcat folder too, because of custom configurations). That backup is essential; it is the baseline.
But this backup is not flexible. The usual problem in a BO production environment is the accidental deletion of some reports or other BO objects. Since BO XI R2 I have used the Import Wizard to generate BIAR files from the command line; I usually create a BAT file with the command line to produce those files. The Import Wizard is gone in BO 4, replaced by the Upgrade Management Tool, but I can still create BIAR files from the command line. Suppose a BO user deleted a report and only reported the deletion a month later. We don't need to restore all the objects from the month-old full backup; with BIAR files we can restore only that report. That is the advantage of using BIAR files.
So my strategy is to use the full backup (repository database + BO installation folder) and also create BIAR files.
What do you think about backing up by generating BIAR files?
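For anyone who wants to script this: here is a minimal sketch, in Python rather than a BAT file, of how the UMT export command can be assembled. The flag names (-mode livetobiar, -source, -biarfile, etc.) follow the usual Upgrade Management Tool syntax but can vary by patch level, so treat them as assumptions and check the command-line reference of your own build:

```python
import datetime
import shlex

def build_umt_export(umt_jar, cms, user, password, biar_dir):
    """Build the Upgrade Management Tool command line for a live-to-BIAR
    export.  Flag names follow the typical UMT syntax but may differ by
    patch level -- check your own installation's documentation."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    biar_file = f"{biar_dir}/daily_export_{stamp}.biar"
    cmd = [
        "java", "-jar", umt_jar,
        "-mode", "livetobiar",          # export from a live CMS to a BIAR file
        "-source", cms,                 # source CMS, e.g. boserver:6400
        "-sourceusername", user,
        "-sourcepassword", password,
        "-biarfile", biar_file,
    ]
    return cmd

cmd = build_umt_export("upgrademanagementtool.jar", "boserver:6400",
                       "Administrator", "secret", "/backup/biar")
print(" ".join(shlex.quote(c) for c in cmd))
```

A scheduler (Windows Task Scheduler or cron) can then run this daily, giving you a dated BIAR file per day to restore single objects from.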

Similar Messages

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we did the sizing for the hardware.
    We have three environments (Test/Development, Training, Production).
    However, the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this? Any PDF files that could help me?)
    Also, could I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for taking a site collection backup of more than 100 GB

    Hi,
    I have a site collection whose data is more than 100 GB. Can anyone please suggest the best practice for taking a backup?
    Thanks in advance....
    Regards,
    Saya

    Hi,
    I think we can do this with a PowerShell script.
    First load the SharePoint snap-in in PowerShell:
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Web application backup & restore:
    Backup-SPFarm -Directory \\WebAppBackup\Development -BackupMethod Full -Item "Web application name"
    Site collection backup & restore:
    Backup-SPSite http://1632/sites/TestSite -Path C:\Backup\TestSite1.bak
    Restore-SPSite http://1632/sites/TestSite2 -Path C:\Backup\TestSite1.bak -Force
    Regards
    manikandan

  • Best Practice for Balance Sheet and Investment Planning

    Hi all,
    I need the Best Practice guides for Balance Sheet and Investment Planning.
    Where can I get them?
    If someone has them, could you please mail them to [email protected]?
    Also, if someone has documentation regarding these, please forward it as well.
    Points assured.
    Thanks in advance,
    Nidhi

    Hi Nidhi,
    Refer to the below link for the available scenarios
    http://help.sap.com/bp_biv335/BI_EN/html/Bw.htm

  • TDMS & Diadem best practices: what if my signal has pauses/breaks?

    I built a LV2011 datalogging application that stores lots of data in TDMS files.  The basic architecture is like this:
    Every channel has these properties:
         To = Start time
         dt =  Sampling interval
    Channel values:
         1D array of DBL values
    After datalogging starts, I just keep appending to the channel values.  And if the TDMS file size goes over 1 GB, I create a new file and start over.  The application runs continuously for days/weeks, so I get a lot of TDMS files.
    It works fine.  But now I need to change my system to allow the data acquisition to pause/restart.  This means there will be breaks in the signal (probably between 30 sec and 10 mins).  I had originally considered recording two values for every datapoint (value & timestamp) like an XY graph, but I am opposed to this in principle because I feel it fills up the hard disk unnecessarily (twice as much disk footprint for the same data?).
    Also, I have never used Diadem, but I want to ensure that my data can be easily opened and analyzed using Diadem.
    My question: are there some best practices for storing signals that pause/break like this?  I would like to just start a new recording with a new start time (To) and have DIAdem be able to somehow "link" these signals, e.g. have it know that it's a continuation of the same signal.
    Obviously, I should install Diadem and play with it.  But I thought I would ask the experts about best practice first, since I have zero Diadem knowledge.
    Solved!
    Go to Solution.

    Hi josborne,
    Are you planning on creating a new TDMS file each time the acquisition stops and starts again, or were you wanting to store multiple start/stop sections withing the same TDMS file?  The easiest way to handle the date/time offset is to store one waveform per Channel per start/stop section and use the "wf_start_time" Channel property that is native to TDMS waveform data-- whether you're wiring an orange floating point array or a brown waveform to the TDMS Write.vi.  DIAdem 2011 has the ability to easily access the date/time offset when it is stored in this Channel property (assuming that it is stored as a date/time and not as a DBL or a string).  If you only have one start/stop section per TDMS file, I would definitely also add a "DateTime" property to File level.  If you want to store multiple start/stop sections in a single TDMS file, I would recommend using a separate Group for each start/stop section.  Make sure you're storing the following Channel properties in the TDMS file if you want that information to flow naturally into DIAdem:
    "wf_xname"
    "wf_xunit_string"
    "wf_start_time"
    "wf_start_offset"
    "wf_increment"
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments
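The wf_* properties Brad lists are enough to rebuild absolute timestamps for every sample without storing a timestamp per datapoint, which addresses josborne's disk-footprint concern. A small Python sketch of the idea (the data here is made up, and a real file would be read with a TDMS library; only the property names mirror the convention above):

```python
from datetime import datetime, timedelta

# Each start/stop section stored as its own group, carrying the native
# TDMS waveform properties instead of per-sample timestamps.
sections = [
    {"wf_start_time": datetime(2024, 1, 1, 8, 0, 0), "wf_increment": 0.1,
     "values": [1.0, 1.1, 1.2]},
    # acquisition paused ~5 minutes, then a new section begins
    {"wf_start_time": datetime(2024, 1, 1, 8, 5, 0), "wf_increment": 0.1,
     "values": [2.0, 2.1]},
]

def timestamps(section):
    """Rebuild one absolute timestamp per sample from t0 + k*dt."""
    t0 = section["wf_start_time"]
    dt = section["wf_increment"]
    return [t0 + timedelta(seconds=k * dt) for k in range(len(section["values"]))]

all_points = [(t, v) for s in sections for t, v in zip(timestamps(s), s["values"])]
print(all_points[0])   # first sample of the first section
```

Because each section keeps its own wf_start_time, the pause length falls out of the data automatically and nothing is stored twice.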

  • Best Practice - What does a complete PO mean?

    Hi! I would like to determine the best practice for managing the PO process and ensuring that the PO is in a state ready for archiving in the future (with minimum effort to correct the PO).
    Is it fair to say that a PO is considered 'complete' if the Complete Delivery indicator is set and the invoiced quantity equals the final goods-receipted quantity? Or do we have to be more stringent, in that the goods-receipted and invoiced quantities have to match the ordered quantity?
    For example:
    PO Qty = 5
    GR Qty = 4 (complete delivery indicator turned on)
    IR Qty = 4
    In the above example, is the PO considered complete? The reason I ask is that I understand the PO can only be archived if it is set with a deletion flag, and when I manually try to delete the PO, an error is output indicating that the quantity delivered is smaller than the quantity ordered (assuming no underdelivery tolerance is maintained here). So, in this case, is it good practice to always update the PO quantity to match the final receipted quantity?
    Appreciate your advice on the above.
    Cheers!
    SF

    Hi,
    If the PO qty is 5 and the GR qty is 4, you can flag the PO as 'delivery completed'.
    Logically the PO will then be considered closed for procurement.
    It is sometimes not possible to change the PO qty to the GR qty, because depending on the settings it may trigger the release strategy again.
    Based on your business requirement, you can either change the PO qty to the GR qty or mark the PO as delivery completed.
    Thx
    Raju
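Raju's rule can be written down as a small check. A sketch in Python; the function and field names are made up for illustration and are not SAP's:

```python
def po_complete(ordered, received, invoiced, delivery_completed):
    """A PO line is ready for closing when GR and IR quantities agree and
    either the full ordered quantity arrived or the 'delivery completed'
    indicator was set to close the line short."""
    if received != invoiced:
        return False            # still awaiting invoices (or a credit memo)
    return received == ordered or delivery_completed

# The example from the thread: PO 5, GR 4, IR 4, indicator set -> closable
print(po_complete(5, 4, 4, True))
```

This matches SF's example: with the indicator set, there is no need to change the PO quantity down to 4 just to archive it.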

  • CSI best practices to get the best Oracle deals

    One company acquired multiple small companies, and now it is time to renew Oracle support. We have 7 CSIs and are paying for all of them; 6 of them have quantity 1-2 (and I am not sure anybody is even using them). Is there any way to check whether someone is using those CSIs?
    One has quantity 50 (Oracle Database Enterprise Edition) - this is the only one the DBAs need and use.
    What is the best practice in this case to utilize/get the best support at the best price? My idea is to consolidate all the CSIs into one and ask Oracle for discounts. Correct me if I am wrong; I am new to this topic.

    I think you can go on MOS for each of those CSI's and create an administrator who can look to see what has been used (if you are asking how to see if anyone is using support). Note that this can be completely misleading, as someone may be using one CSI to, say get patches for all the supported environments. You might also think who needs to get notified, support is pretty inflexible about it.
    I somewhat disagree with what ajallen said about the purchasing department doing these negotiations, as I have seen, er, a wide variety of ability there. It might be worth it to get one of those license specialist companies, though I wouldn't know which are good. From what I've seen, Oracle will try to charge you to desupport unused licensing, and sometimes do other nasty things if you have hardware on their poop-list (think HP). On the Pollyanna side, if you can dangle a potential database consolidation sale in front of them, they might play nice.

  • Need best practices PROCESS to schedule RMAN Backups.

    Hi All,
    I would like a suggestion on the following for RMAN Backups:
    Details:
    Database: - 11gR2, Size 3TB on ASM - DW database.
    Like suggestions on:
    1) What kind of backups to schedule - Incremental along with Block backups?
    2) Size required to allocate for the Backup Space - Can it be ASM or Disk Space?
    3) Anything else - please suggest.
    Thank you.

    For that size, you might do weekly L0 and daily L1 (differential) backups.  Try to do L1 (cumulative) backups if possible.
    Getting the total sizes of Segments (from DBA_SEGMENTS) will give you an approximation for the minimum size.  (Tables/Indexes that have been dropped and are no longer in DBA_SEGMENTS wouldn't appear but the underlying blocks for them if unused by other Tables/Indexes would still be backed up as they'd have been formatted).
    You could run the Backup as a COMPRESSED BACKUPSET to reduce the size.
    Whether you want ASM or FileSystem depends on your comfort level.  If you run backups to ASM, you can only use RMAN to backup from ASM to Tape.  However, backups on FileSystem can be copied to tape using any other method (tape backup utility).  Also consider if you want to replicate/copy the backups to other servers and what methods you'd have available.
    Hemant K Chitale
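As a concrete illustration of that cycle, a small sketch that emits the RMAN commands per day. The Sunday/weekday split and the PLUS ARCHIVELOG handling are my assumptions on top of Hemant's answer, so adapt them to your own window and FRA/tape setup:

```python
def rman_script(day):
    """Emit the RMAN commands for the suggested cycle: a level-0 on
    Sunday, cumulative level-1 the rest of the week, compressed to keep
    the 3 TB footprint down.  Archivelog handling is illustrative."""
    level = ("INCREMENTAL LEVEL 0" if day == "SUN"
             else "INCREMENTAL LEVEL 1 CUMULATIVE")
    return "\n".join([
        "RUN {",
        f"  BACKUP AS COMPRESSED BACKUPSET {level} DATABASE",
        "    PLUS ARCHIVELOG DELETE INPUT;",
        "}",
    ])

print(rman_script("SUN"))
print(rman_script("MON"))
```

Writing the script per day like this makes it easy to hang off cron or DBMS_SCHEDULER without duplicating the RMAN text.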

  • What is the best backup plan for Archive Databases in Exchange 2013?

    Hi,
    We have Exchange 2013 with Hybrid setup with O365.
    We have on-premise Exchange 2013 servers with 3 copies of the primary database and a single copy of the archival DBs.
    Now we have to frame a backup policy with Symantec Backup Exec, which has to back up our primary and archival DBs.
    In Exchange 2007, before the migration to 2013, our DB policy was weekly full backup and monthly full backup.
    Please suggest the best possible backup strategy we can follow with the 2013 DBs, especially for the archiving DBs.
    Our archiving policy has 3 categories: any email older than 6 months, 1 year, or 2 years should go to the archive mailbox.
    Keeping this in mind, how should we design the backup policy?
    Manju Gowda

    Hi Manju,
    you will not find best practices that differ from the common backup guidelines, as there is no archive-DB-specific behaviour. Your users may move items to their archive at any time, and your retention policies may move items that matched them at any time. The result is frequently changing content in both mailbox and archive mailbox databases, so you need to back up both the same way. You may also keep archives together with the mailboxes in the mailbox DB.
    Please keep in mind that backup usually means data availability in case of system failure. So you may consider a less frequent backup of your archive DB, with a dependency on the "keep deleted items" (/mailboxes) setting on your mailbox database.
    Example:
    keep deleted items: 30 days
    backup of archive db: every 14 days
    restore procedure:
    * restore archive DB content
    * add difference from recover deleted items (or Backup Exec single item recovery) for the missing 14 days.
    So it depends more on your process than on a backup principle.
    Regards,
    Martin
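The constraint behind Martin's example can be stated in one line: the restore procedure only works while the backup interval does not exceed the "keep deleted items" window. A trivial sketch of that check:

```python
def gap_recoverable(keep_deleted_days, backup_interval_days):
    """The scheme works only while every item deleted since the last
    backup is still inside the 'keep deleted items' window, i.e. the
    backup interval must not exceed the retention setting."""
    return backup_interval_days <= keep_deleted_days

# The thread's example: 30-day retention, archive DB backed up every 14 days
print(gap_recoverable(30, 14))   # True: the 14-day gap can be filled
print(gap_recoverable(30, 45))   # False: deletions older than 30 days are lost
```

Worth re-checking whenever either setting changes, since lengthening the backup interval silently breaks the recovery path.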

  • CO-PA: product hierarchy predefined from best practice

    Hi guys,
    unfortunately I have as basis a best practice system, where product hierarchy with 3 levels is already defined in CO-PA:
    PAPH1     ProdHier01-1     ProdH01-1     CHAR     5     MVKE     PAPH1
    PAPH2     ProdHier01-2     ProdH01-2     CHAR     10     MVKE     PAPH2
    PAPH3     ProdHier01-3     ProdH01-3     CHAR     18     MVKE     PAPH3                 
    PRODH     Prod.hierarchy     Prod.hier.     CHAR     18     MVKE     PRODH
    Those ones are already assigned to a best practice operating concern:
    10UK     Best Practices
    1. we have 7 levels in our product hierarchy
    2. I need to read it from mara and not mvke
    When trying to add product hierarchy as a characteristic in KEA5 in our new operating concern, I got the message below.
    This data element is used in tons of tables, and the whole procedure looks risky; after the transport I might have issues again.
    Creating WW* characteristics for the product hierarchy is not an option, since then I would need to maintain the whole product hierarchy as characteristic values again (like before release 4.5); that is far more than 1000 entries and double maintenance after go-live.
    Deleting the best practice operating concern is also difficult, since there are several clients where customizing still sits for 10UK.
    Does anybody have experience with this? What did you do?
    regards
    Bjoern
    Here is the text from KEA5 when trying to add MARA fields (since the data element lengths differ due to more levels):
    Product hierarchy field PAPH1 cannot be adapted to the new structure
    Message no. KE691
    Diagnosis
    The definition of the product hierarchy has changed (structure PRODHS). The hierarchy field generated in CO-PA, PAPH1 cannot be adapted to the new definition, because it is already being used in structures or tables.
    System Response
    The field PAPH1 does not appear in the list of fields that can be selected.
    It can no longer be used as a characteristic.
    Procedure
    If you still want to define hierarchy field PAPH1 as a characteristic, proceed as follows:
    1. Display the where-used list for data element RKEG_PAPH1 in database fields using transaction SE12. For hierarchy field PAPH1 to be adapted, a data conversion must be carried out for all the tables listed!
    2. First change the number of characters in the domain RKEG_PAPH1 to match the new definition of the product hierarchy, following the example below. You can do this using transaction SE11.
    Structure PRODHS          Domain
    Field     Length          Name            Number of characters
    PRODH1    3               RKEG_PAPH1      3
    PRODH2    2               RKEG_PAPH2      5
    PRODH3    5               RKEG_PAPH3      10
    PRODH4    8               RKEG_PAPH4      18

    Just as info,
    I needed to delete those characteristics - quite some work:
    1. Delete all where-used entries for the characteristics in ALL CLIENTS of this system (important: e.g. planning layouts - recommended to change rather than simply delete, because a change can be transported).
    2. When then trying to delete the characteristics (with "unlock" in KEA0 > value fields), tons of tables and structures popped up where this data element was still needed - follow note 353257.
    3. Regenerate the best practice operating concern (10UK in our case).
    4. Create PAPH1 etc. anew from table MARA (not MVKE in our case - it depends).
    All good - hopefully no issues after transport; we will see.

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them in Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    > Would you use one SQL statement to retrieve all of the data for display?
    Yes.
    > And use logic to avoid printing the redundant part of the data?
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than strings - i.e. one object, with a collection attribute to hold the related "many" records.
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? JUnit testing?
    > The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block there's nothing like making a pain of yourself, complaining how backwards the work they have done is, to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
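The shape evnafets recommends - one joined query, grouped into parent objects that each hold a collection of the related "many" records - looks like this in outline. Sketched in Python for brevity; the same structure applies to a Java bean with a List attribute:

```python
# Rows as they come back from a one-to-many join: parent columns repeat
# for every child row.
rows = [
    ("order-1", "Alice", "itemA"),
    ("order-1", "Alice", "itemB"),
    ("order-2", "Bob",   "itemC"),
]

def group_rows(rows):
    """Collapse the repeated parent columns: one dict per parent, with a
    list attribute holding the related 'many' records."""
    parents = {}
    for order_id, customer, item in rows:
        parent = parents.setdefault(order_id, {"order_id": order_id,
                                               "customer": customer,
                                               "items": []})
        parent["items"].append(item)
    return list(parents.values())

orders = group_rows(rows)
print(orders[0]["items"])   # ['itemA', 'itemB']
```

The JSP then just iterates parents and nested children, with no "skip the redundant columns" logic in the view at all.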

  • RD Session Host lock down best practice document

     
    Hello,
    I am currently working on deploying an RDS farm. My farm has several RD Session Host servers. Today I learned that you can do some bad things to the RD Session Hosts if a user presses CTRL + Alt + End while having an open session. I locked all of this down using different GPOs, which include disabling access to Task Manager and cmd, locking the server, reboot and shutdown, etc.
    However, this being said, how would I know what else to lock down, since I am new to this topic? I tried to find a Microsoft document about best practices for what should be locked down, but I wasn't successful, and unfortunately a search in the forum did not bring up anything either.
    With all the different features and option Windows Server 2008 R2 has I do not even know where to start.
    Can someone please point me in the right direction?
    Thank you
    Marcus

    Hi,
    The RD Session Host lock-down best practices of each business are different; every enterprise admin has to find the most suitable solution based on their own IT infrastructure.
    I collected some resource info for you.
    Remote Desktop Services: Frequently Asked Questions
    http://www.microsoft.com/windowsserver2008/en/us/rds-faq.aspx
    Best Practices Analyzer for Remote Desktop Services
    http://technet.microsoft.com/en-us/library/dd391873(WS.10).aspx
    Remote Desktop Session Host Capacity Planning for 2008 R2
    http://www.microsoft.com/downloads/details.aspx?FamilyID=CA837962-4128-4680-B1C0-AD0985939063&displaylang=en   
    RDS Hardware Sizing and Capacity Planning Guidance.
    http://blogs.technet.com/iftekhar/archive/2010/02/10/rds-hardware-sizing-and-capacity-planning-guidance.aspx
    Technical Overview of Windows Server® 2008 R2 Remote Desktop Services
    http://download.microsoft.com/download/5/B/D/5BD5C253-4259-428B-A3E4-1F9C3D803074/TDM%20RDS%20Whitepaper_RC.docx
    Remote Desktop Load Simulation Tools
    http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c3f5f040-ab7b-4ec6-9ed3-1698105510ad
    Hope this helps.
    Technology changes life……

  • Best Practices for AD and Windows Environment

    Hello Everyone,
    I need to create a document containing the best practices for AD: DNS, DHCP, AD structure, Group Policy, trusts, etc.
    I just need the best practices, irrespective of what is implemented in our company.
    For now I just need to create a document for analysis. I searched the internet but could not find much, so I would ask you all to share suggestions on where I can find them.
    If anyone could send me a document or point me to a link, I would appreciate it. I am pretty new to the technology, so I need your help.
    Thanks in Advance

    I have an article where I shared the best practices to use to avoid known AD/DNS issues: http://www.ahmedmalek.com/web/fr/articles.asp?artid=23
    However, you first need to identify your requirements; based on them, you can identify what should be implemented in your environment and how to manage it. The basic rule is that you need at least two DC/DNS/GC servers per AD domain for high availability. You also need to take a system state backup of at least one DC/DNS/GC server in your domain. As for DHCP, you can use the 50/50 or 80/20 DHCP rule depending on your setup.
    You can also refer to that: https://technet.microsoft.com/en-us/library/cc754678%28v=ws.10%29.aspx
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK
    My Website Link
    My Linkedin Profile
    My MVP Profile

  • Database Administration - Best Practices

    Hello Gurus,
    I would like to know various best practices for managing and administering Oracle databases. To give you an example of what I am thinking about: if you joined a new company and wanted to see whether all the databases conform to some kind of standard/best practice, what would you look for? For instance: are the control files multiplexed, is there more than one member in each redo log group, is the temp tablespace using a TEMPFILE... something of that nature.
    Do you have something in place which you use on a regular basis? If yes, I would like to get your thoughts and insights on this.
    Appreciate your time and help with this.
    Thanks
    SS

    I have a template that I use to gather preliminary information so that I can at least get a glimmer of what is going on. I have posted the text below... it looks better as a spreadsheet.
    System Name               
    System Description               
         Name      Phone     Pager
    System Administrator               
    Security Administrator               
    Backup Administrator               
    Below This Line Filled Out for Each Server in The System               
    Server Name               
    Description (Application, Database, Infrastructure,..)               
    ORACLE version/patch level          CSI     
              Next Pwd Exp     
    Server Login               
    Application Schema Owner               
    SYS               
    SYSTEM               
         Location          
    ORACLE_HOME               
    ORACLE_BASE               
    Oracle User Home               
    Oracle SQL scripts               
    Oracle RMAN/backup scripts               
    Oracle BIN scripts               
    Oracle backup logs               
    Oracle audit logs               
    Oracle backup storage               
    Control File 1               
    Control File 2               
    Control File 3                    
    Archive Log Destination 1                    
    Archive Log Destination 2                    
    Datafiles Base Directory                    
    Backup Type     Day     Time     Est. Time to Comp.     Approx. Size
    archive log                    
    full backup                    
    incremental backup                    
    As for "best" practices, well, I think you know the basics from your posting, but a lot of it will also depend on the individual system and how it is integrated overall.
    Some thoughts I have for best practices:
    Backups ---
    1) Nightly if possible
    2) Tapes stored off site
    3) Archives backed up through out day
    4) To Disk then to Tape and leave backup on disk until next backup
    Datafiles ---
    1) Depending on hardware used.
    a) separate datafiles from indexes
    b) separate high-I/O datafiles/indexes on dedicated disks/LUNs/trays
    2) file names representative of usage (similar to its tablespace name)
    3) Keep them of reasonable size < 2 GB (again system architecture dependent)
    Security ---
    At least meet DOD - DISA standards where/when possible
    http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
    Hope that gives you a start
    Regards
    tim
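A few of the checks from this thread can be collected into a script. A sketch: the views V$CONTROLFILE, V$LOG and V$TEMPFILE are standard, but the thresholds are just the rules of thumb from the posts above, and how you run the queries is up to you:

```python
# Sanity checks expressed as (query, pass-condition) pairs against the
# standard dynamic performance views.  Thresholds are rules of thumb,
# not hard limits.
checks = {
    "control files multiplexed":
        ("SELECT COUNT(*) FROM V$CONTROLFILE", lambda n: n >= 2),
    "redo groups have >1 member":
        ("SELECT MIN(MEMBERS) FROM V$LOG", lambda n: n >= 2),
    "temp tablespace uses tempfiles":
        ("SELECT COUNT(*) FROM V$TEMPFILE", lambda n: n >= 1),
}

def evaluate(results):
    """results: {check name: value fetched by the query}. Returns the
    names of the checks that failed; checks without a result are skipped."""
    return [name for name, (_, ok) in checks.items()
            if name in results and not ok(results[name])]

# e.g. a host with a single control file fails the first check
print(evaluate({"control files multiplexed": 1,
                "redo groups have >1 member": 2}))
```

Running a table like this against every instance you inherit gives the "glimmer of what is going on" tim describes, in repeatable form.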

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable(work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create MyItemDetail. Is it better to leave detail as a null reference, or should a MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources, whereas a bunch of null references is not. But on the receiving end, you have to account for the MyItemDetail reference being null or not - is this a hassle or not?
    I looked for this at [url http://www.javapractices.com]Java Practices but found nothing.

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > I do a lot of database work though. And for that null means something specific.
    Sure, return null if you have a context where null means something, like for example that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work. And it does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?
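For what it's worth, the two conventions being argued about - returning null versus returning a safe "null object" placeholder - can be sketched like this (Python stand-ins for the MyItem/MyItemDetail classes above; all names made up):

```python
class ItemDetail:
    def __init__(self, text):
        self.text = text
    def summary(self):
        return self.text

class NoDetail(ItemDetail):
    """Null Object: a safe, empty stand-in so callers never test for None."""
    def __init__(self):
        super().__init__("")
    def summary(self):
        return "(no detail)"

class Item:
    def __init__(self, name, detail=None):
        self.name = name
        # pick ONE convention and keep it: here, never store None
        self.detail = detail if detail is not None else NoDetail()

# Callers need no null check either way:
print(Item("widget").detail.summary())                      # (no detail)
print(Item("gadget", ItemDetail("blue")).detail.summary())  # blue
```

The Null Object costs one shared instance rather than one empty object per item over RMI, which sidesteps the resource objection while still sparing the receiving end the null checks.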
