Use journaling to restore to specific date?

Is there any way to use journaling to restore the system to a specific date?
thanks, howard

Hello Howard.
None that I'm aware of. Journaling protects the integrity of the file system's own structures after a crash or power loss; it is not a snapshot or versioning mechanism that can roll the volume back to an earlier date.
See Mac OS X: About file system journaling for information.

Similar Messages

  • Restore to specific date via iCloud?

    I synced my iPhone to my PC. I wanted to sync contacts FROM the iPhone TO the PC, but instead it wiped out my iPhone contacts and replaced them with the PC contacts. Can I restore my iPhone to how it was 7 days ago via iCloud? It backs up every night, but I don't want the current backup, I want the one from about 7 days ago, before I synced.

    Apple claims to keep the last three backups. I'm not sure of this myself, but either way, if you have backed up seven times since losing the contacts, it seems you can no longer recover them.

  • Using journalized data in an interface with an aggregate function

    Hi
    I am trying to use the journalized data of a source table in one of my interfaces in ODI. The trouble is that one of the mappings on the target columns involves an aggregate function (SUM). When I run the interface I get an error saying "not a GROUP BY expression". I checked the code and found that the JRN_SUBSCRIBER, JRN_FLAG and JRN_DATE columns are included in the SELECT statement but not in the GROUP BY statement (the GROUP BY statement only contains the remaining two columns of the target table).
    Is there a way around this? Do I have to manually modify the KM? If so, how would I go about doing it?
    Also, I am using the Oracle GoldenGate JKM (Oracle to Oracle OGG).
    Thanks, and I really appreciate the help.
    Ajay

    'ORA-00979' When Using The ODI CDC (Journalization) Feature With Knowledge Modules Including SQL Aggregate Functions [ID 424344.1]
    Modified: 11-MAR-2009 | Type: PROBLEM | Status: MODERATED
    In this Document
    Symptoms
    Cause
    Solution
    Alternatives :
    This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.
    Applies to:
    Oracle Data Integrator - Version: 3.2.03.01
    This problem can occur on any platform.
    Symptoms
    After having successfully tested an ODI Integration Interface using an aggregate function such as MIN, MAX, SUM, it is necessary to set up Changed Data Capture operations by using Journalized tables.
    However, during execution of the Integration Interface to retrieve only the Journalized records, problems arise at the Load Data step of the Loading Knowledge Module and the following message is displayed in ODI Log:
    ORA-00979: not a GROUP BY expression
    Cause
    Using both CDC - Journalization and aggregate functions gives rise to complex issues.
    Solution
    Technically there is a workaround for this problem (see below).
    WARNING: Oracle engineers caution that this type of setup may give unexpected results. This is related to the way ODI Journalization is implemented as specific Journalization tables: the aggregate function will only operate on the subset of rows stored (referenced) in the Journalization table, and NOT over the entire Source table.
    We recommend avoiding this type of Integration Interface setup.
    Alternatives :
    1. The problem is due to the missing JRN_* columns in the generated SQL GROUP BY clause.
    The workaround is to duplicate the Loading Knowledge Module (LKM) and, in the clone, alter the "Load Data" step by editing the "Command on Source" tab and replacing the following instruction (a SQL sketch of the effect follows this list):
    <%=odiRef.getGrpBy()%>
    with
    <%=odiRef.getGrpBy()%>
    <%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
    ,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
    <%}%>
    2. It is possible to develop two alternative solutions:
    (a) Develop two separate and distinct Integration Interfaces:
    * The first Integration Interface loads data into a temporary Table and specifies the aggregate functions to be used in this initial Integration Interface.
    * The second Integration Interface uses the temporary Table as a Source. Note that if you create the Table in the Interface, it is necessary to drag and drop the Integration Interface into the Source panel.
    (b) Define two connections to the Database so that the Integration Interface references two distinct and separate Data Server Sources (one for the Journal, one for the other Tables). In this case, the aggregate function will be executed on the Source Schema.
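    To make the failure mode concrete, here is a minimal SQL sketch of the statement generated before and after the change described in alternative 1 (the table and non-JRN column names are hypothetical; only the JRN_* columns come from ODI's journalization):
    -- Before: the JRN_* columns appear in the SELECT list but not in the
    -- GROUP BY clause, so Oracle raises ORA-00979.
    SELECT cust_id, SUM(amount), JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    FROM   orders_journal
    GROUP BY cust_id;
    -- After: the KM change appends the JRN_* columns to the GROUP BY,
    -- which is valid SQL (but see the WARNING above about the semantics).
    SELECT cust_id, SUM(amount), JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    FROM   orders_journal
    GROUP BY cust_id, JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE;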
    Products: Middleware > Business Intelligence > Oracle Data Integrator (ODI) > Oracle Data Integrator
    Keywords: ODI; AGGREGATE; ORACLE DATA INTEGRATOR; KNOWLEDGE MODULES; CDC; SUNOPSIS
    Errors: ORA-00979
    The content above is from OTN; searching for this document ID in the Oracle Support Knowledge Base should show you the same note.
    Cheers
    Sachin

  • How to restore Project from repository for specific date ?

    Hello,
    I have a Web Dynpro project. Some files were deleted in the project, and the deletions were checked in to the repository.
    Could you please describe the steps to restore the project state as of a specific date?

    The only way you could do that was if you used Time Machine on the computer to make backups. You could simply store the mobile backups folder on the day you wanted and then restore the iPhone.
    I've done this successfully in the past.

  • Restore from Time Machine SINCE a specific date

    Guys,
    My laptop has been in the shop for a couple of months (don't ask!!), so I requisitioned my daughter's laptop to use over this time. I set it up as a duplicate of mine by restoring the whole backup from Time Machine to her laptop at the beginning, and have subsequently been backing up her laptop to my sparse bundle on Time Machine, so everything is up to date.
    I have my original laptop back today, so I would like to restore from backup the work I have done on her machine over the past few weeks. Without knowing specifically what that is (I know you can select individual files or folders to restore, if you know what they are), is there a way I can restore only the files and folders changed since my laptop went into the shop, i.e. only those changed since a specific date, without restoring the whole thing?
    I'm using Leopard.

    Hendrixxx wrote:
    Thanks Pondini. That's what I ended up doing....I was just wondering if there was a faster way of doing it - was just trying to save some time - transferring only the files that had changed, rather than the whole shebang.
    Yes, that might be a good option. I don't know whether Apple never thought of it, or didn't think it was worth the effort.
    It might have been intentional, though. There is a UNIX command (touch) that can be used to alter the last access and/or last modified dates. I don't think it's actually used much, but it could certainly put a "kink" in such a scheme!

  • My iPod displays this message: "Connect to a computer. Use iTunes to restore."  When I connect to a computer, iTunes doesn't recognise it.  The iPod 5th Gen does not work at all. How do I fix my iPod without losing any data?

    My iPod displays this message:
    "Connect to a computer. Use iTunes to restore."  When I connect to a computer, iTunes doesn't recognise it.  The iPod 5th Gen does not work at all. How do I fix my iPod without losing any data?
    I have seen the Apple webpage suggesting the "five R's" (including restore).  Before doing anything, I want to be sure that restoring or any other action is not going to delete or risk my data. 
    I would be very grateful if anyone can suggest a fix for this problem, in a way that will not risk my data. 
    Thanks in advance!

    Did you try a hard reset with the iPod still connected to your PC?  Have you tried multiple USB ports, preferably high powered USB 2.0 ports?
    To do a hard reset, first make sure the hold switch is in the Off position, then press and hold both the Select (Center) and Menu buttons together long enough for the Apple logo to appear.
    Have you also worked through each and every single troubleshooting suggestion in this Apple support document?
    iPod not recognized in 'My Computer' and in iTunes for Windows
    B-rock

  • I just bought a brand new iPhone 4s from a Three store. When I used the cloud restore to put the data from my old 3gs onto the device, it's prompting me for an Apple ID login with an email address that isn't mine!

    I just got a brand new iPhone 4S on a contract from my local Three store. When I used the cloud restore to put the data from my old 3GS onto the device, it's prompting me for an Apple ID login with an email address (hotmail) that isn't mine! My old phone has NEVER had any Apple ID other than mine log into it, and the new one's a sealed box that I've opened and set up from scratch. Can anyone please explain this? I'm worried there's some kind of problem with my iCloud data being mixed with someone else's or something... Since I logged in with my own ID, it's downloaded the data perfectly fine, but I just got the Apple ID prompt again a moment ago with this same hotmail address in it... I'd be very grateful for any explanation!

    In iTunes on your computer, select the Purchased section under STORE in the left column. Click on the first track and choose "Get Info" from the "File" menu.
    In the "Summary" tab you'll see details of the purchaser of the track. Click Next to go through all your purchased music one by one until you find the ones you need to get rid of.
    Unfortunately, there are no smart playlist rules that can filter tracks by the purchaser that I know of.

  • How can I use Automator to extract specific Data from a text file?

    I have several hundred text files that contain a bunch of information. I only need six values from each file and ideally I need them as columns in an excel file.
    How can I use Automator to extract specific Data from the text files and either create a new text file or excel file with the info? I have looked all over but can't find a solution. If anyone could please help I would be eternally grateful!!! If there is another, better solution than automator, please let me know!
    Example of File Contents:
    Link Time = DD/MMM/YYYY
    Random
    Text
    161 179 bytes of CODE    memory (+ 68 range fill )
    16 789 bytes of DATA    memory (+ 59 absolute )
    1 875 bytes of XDATA   memory (+ 1 855 absolute )
    90 783 bytes of FARCODE memory
    What I would like to have as a final Excel file (one row per input file):
    Column 1     Column 2    Column 3  Column 4  Column 5  Column 6
    MM/DD/YYYY   filename1   161179    16789     1875      90783
    MM/DD/YYYY   filename2   xxxxxx    xxxxx     xxxx      xxxxx
    MM/DD/YYYY   filename3   xxxxxx    xxxxx     xxxx      xxxxx
    Is this possible? I can't imagine having to go through each and every file one by one. Please help!!!

    Hello
    You may try the following AppleScript script. It will ask you to choose a root folder in which to start searching for *.map files, and will then create a CSV file named "out.csv" on the desktop, which you can import into Excel.
    set f to (choose folder with prompt "Choose the root folder to start searching")'s POSIX path
    if f ends with "/" then set f to f's text 1 thru -2
    do shell script "/usr/bin/perl -CSDA -w <<'EOF' - " & f's quoted form & " > ~/Desktop/out.csv
    use strict;
    use open IN => ':crlf';
    chdir $ARGV[0] or die qq($!);
    local $/ = qq(\\0);
    my @ff = map {chomp; $_} qx(find . -type f -iname '*.map' -print0);
    local $/ = qq(\\n);
    #     CSV spec
    #     - record separator is CRLF
    #     - field separator is comma
    #     - every field is quoted
    #     - text encoding is UTF-8
    local $\\ = qq(\\015\\012);    # CRLF
    local $, = qq(,);            # COMMA
    # print column header row
    my @dd = ('column 1', 'column 2', 'column 3', 'column 4', 'column 5', 'column 6');
    print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
    # print data row per each file
    while (@ff) {
        my $f = shift @ff;    # file path
        if ( ! open(IN, '<', $f) ) {
            warn qq(Failed to open $f: $!);
            next;
        }
        $f =~ s%^.*/%%og;    # file name
        @dd = ('', $f, '', '', '', '');
        while (<IN>) {
            chomp;
            $dd[0] = \"$2/$1/$3\" if m%Link Time\\s+=\\s+([0-9]{2})/([0-9]{2})/([0-9]{4})%o;
            ($dd[2] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of CODE\\s/o;
            ($dd[3] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of DATA\\s/o;
            ($dd[4] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of XDATA\\s/o;
            ($dd[5] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of FARCODE\\s/o;
            last unless grep { /^$/ } @dd;    # stop once all six fields are filled
        }
        close IN;
        print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
    }
    EOF"
    Hope this may help,
    H

  • Issue found in EHS when using the specification data import process

    Dear EHS community
    Having now used EHS Classic for a long time, we have detected an issue in the EHS standard import. During maintenance of EHS data (normally via CG02) the system always uses the default "Data origin" specified in Customizing when storing records in the EHS tables (e.g. ESTRH, ESTRI etc.). In the standard specification import process one can define a different "Data origin". We used an import file with the default data origin and executed the import.
    A strange effect has now been detected (though not always) for the update of identifiers. For the import you must nominate at least one identifier. If the identifier is found, then normally no update happens and only the value assignment data is inserted (or updated). If the identifier is not found, it gets inserted on spec level in ESTRI. During the update, however, the "Data origin" of the identifier already present in the system (the one that matched the identifier on file level) was changed, but not the identifier as such. Every data record on value assignment level received the default data origin. There is no obvious explanation for this behaviour: where the default "Data origin" was "SAP" (as the term), the value was changed to "space". Any explanation of this effect (or any idea regarding it) is appreciated.
    C.B.
    PS: analysis of the change logs in EHS etc. executed so far clearly indicates that an "update" happened on the identifier, but only the field SRSID is affected; the EHS import is quite old and therefore very stable.
    PPS: I found a thread talking about the import file:
    spec import_inheritance data
    The example shown there is like:
    +BS
    +BV   $ESTRH
    SRSID                          EH&S
    SUBID                          000000385000
    SUBCAT                         REAL_SUB
    AUTHGRP                        EHS_PS
    +EV
    +BV   $ESTRI
    SRSID                          EH&S
    IDTYPE                         NAM
    IDCAT                          EHS_OLD
    IDENT                          XY0002
    ORD                            0001
    +EV
    +BV   SAP_EHS_1013_001
    $ESTVA-SRSID                   EH&S
    SAP_EHS_1013_001_VALUE         N09.00101280
    +EV
    If you compare with the SAP help, SRSID normally appears only at the beginning of the file. Here this field is nominated often, on the level of ESTRH as well as ESTRI.
    PPS: e.g. refer to: TCG56 EHS: Data Origin - SAP Table - ABAP

    Dear Ralph
    first of all, thanks for the feedback. Regarding the content provider: we need to check that on a deeper level.
    Regarding the import, maybe my explanation was not good enough.
    Imagine this case:
    You have a specification in the system to which you would like to add e.g. density data. To do so you need at least one identifier, which you must nominate during the import. As long as this identifier is "identical" in the system and in the file, the identifier should not be "changed/affected" etc., and only the additional data should be loaded. This is the process we used. We have now detected that this seems not to be what really happens: the identifier that is part of the file is "updated" in EH&S. In the example above somebody used this logic:
    +BV   $ESTRI
    SRSID                          EH&S
    IDTYPE                         NAM
    IDCAT                          EHS_OLD
    IDENT                          XY0002
    ORD                            0001
    +EV
    Nearly the same is used in our process. The only difference is that we do not define the "SRSID". The same is true for any other data in the file: SRSID is never specified.
    What is happening now:
    e.g. the "density" data is added with SRSID "EH&S". This effect is "normal": by default SRSID should be EH&S, as this is defined as such in Customizing, and at the top of the file the ID is "EH&S" as well. In the system we have "XY0002" with SRSID EH&S. After using this upload approach, the only difference is that in the system XY0002 gets a "blank" SRSID (and there is no data origin "blank" defined). Up to today my understanding was clearly that no update should happen on the identifier. This seems not to be the case. Is my understanding wrong? Or is the SRSID really mandatory on ESTRI level in the load file to avoid this effect? I hope that you can provide some feedback regarding this.
    C.B.
    PS: referring to: Example: Transfer File for Specifications - Basic Data and Tools (EHS-BD) - SAP Library
    The header of the import file should look like this (label: tag and value):
    Comment:                         +C
    Administrative section
    Character standard:              +SC   ISO-R/3
    Identification (database name):  +ID   IUCLID
    Format version:                  +V    2.21
    Export date:                     +D    19960304
    Key date for export:             +VD   19960304
    Set languages for export:        +SL   E
    Date format:                     +DF   DD.MM.YYYY
    In our case +ID = EH&S (as this is the value in the export file).
    In this example this additional block is shown:
    Begin table:  +BV    $ESTRI
      Table field:  IDTYPE   NAM
      Table field:  IDCAT    IUPAC
      Table field:  IDENT    anisole
      Table field:  LANGU    E
      Table field:  OWNID    ID1
    Therefore no SRSID is specified. This is the data in our file (at a high level), and the "only" change is that the identifier gets its SRSID "deleted".

  • Using Time Machine, is it possible to back up a computer AFTER a specific date. I only want to back up files from AFTER January 2015. Is this possible, and how can I do this?

    Using Time Machine, is it possible to back up a computer AFTER a specific date. I only want to back up files from AFTER January 2015. Is this possible, and how can I do this?

    emilypdunne wrote:
    Using Time Machine, is it possible to back up a computer AFTER a specific date. I only want to back up files from AFTER January 2015. Is this possible, and how can I do this?
    Is what you're saying that you only want backups in TM of the state of your Mac from January 2015 to the current date, and nothing before then? I'd think the simplest solution would be to delete in TM all the backups prior to January 2015 and then let TM do its thing. The ultimate source for all things Time Machine is here. On this page, note the comment that "Time Machine will never delete the backup copy of anything that was on the disk being backed-up at the time of any remaining backup", which I understand to mean that if you delete all the backups in the TM file prior to January 2015, any file that wasn't present on your Mac as of January 2015 will no longer be represented in the TM file. Note that the second link also has instructions for how to delete TM files.
    If you were trying to accomplish something else, then as Emily Litella (Gilda Radner in SNL) used to say, "never mind."

  • Odbc Driver error when using specific date column

    I am receiving the this error:
    Odbc driver returned an error (SQLExecDirectW).
    Error DetailsError Codes: OPR4ONWY:U9IM8TAC:OI2DL65P State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46036] Internal Assertion: Condition len <= desc->objectLength_, file ..\Oci8\Src\SQXDGOci8.cpp, line 614. (HY000)
    It happens when I try to set criteria on a specific date column, or when I do not set criteria at all. If I set date column 1 to greater than 4 or so months back, the report returns, along with all the dates from date column 2. But if I then try to set date column 2 as the criteria, no matter if I just ask for data from a week ago until today, I receive this error. I tried changing the server stack size to 512, but that did not help. Anyone have an idea on this?

    You seem to be using an ODBC connection with an Oracle database, is that correct? If so, you should install the Oracle client and use the OCI driver instead; it is much faster and more stable than ODBC.

  • When to use specific data structures

    Hi
    I'm currently learning Java and have found a wealth of information on data structures (linked lists, queues, et al.), but I'm finding not much material on when best to apply specific data structures and their advantages and disadvantages. Could anyone point me to any resources?
    Second question: as a Java developer, do you actually use things like linked lists regularly?
    Thanks in advance

    I suppose that a wealth of information exists because data structures are an integral part of programming. In simpler terms, all that information exists because the answer to question two is yes, absolutely.
    The answer to question one is a bit trickier. It's kind of like asking when it is best to use a flathead vs. a Phillips screwdriver. The answer seems obvious within a given context, but obscure outside of one.
    IMHO: the 'when' becomes clear with experience. Focus on the 'how' and you'll be ready when you are faced with a problem that calls for a particular type of solution.
    jch

  • Enforcing integrity using oracle specific data base commands .v. using front end

    Full subject : Enforcing integrity using oracle specific data base commands .v. using front end.
    It ought to be generally accepted that it is better to enforce integrity directly in the data base using constraints, dependencies, triggers etc rather than leaving it to specific front end programmes.
    In my view, the chief advantages - of enforcing integrity directly in the data base - are:
    (1) No process can violate the integrity.
    (2) Being server centric, these commands run on the server and so can be easily sized in one place.
    (3) One time data migration (imports) into the system using Oracle tools or SQL commands would also need to conform to the integrity constraints. Thus an implementor would be assured that the basic data is alright.
    I am faced with a situation where we are about to implement a new Oracle based package. During data migration, when we discovered that there are no integrity constraints built into the target data base, the package vendor asserted that it is not necessary to build in integrity into the database. This seems to be an extreme and risky view.
    Further, it is argued by the package vendor that putting constraints directly in the data base would significantly increase the needed resources (RAM) on the server. In my view, this increase is trivial and anyway, hardware costs are crashing day by day.
    In the absence of integrity checks in the data base, it seems to me that every program would have to do extra zero-value work to ensure the integrity of the end-user data, and it would still never be complete.
    I would like to know the pros and cons of implementing without integrity constraints.
    OK.
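    To make the argument concrete, here is a minimal sketch of declaring integrity directly in the database (table, column and constraint names are hypothetical):
    -- Declared once on the server, these rules bind every front-end
    -- program, batch job and one-time migration alike.
    ALTER TABLE orders
      ADD CONSTRAINT fk_orders_customer
      FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
    ALTER TABLE orders
      ADD CONSTRAINT ck_orders_qty CHECK (quantity > 0);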

    AnanthaP wrote:
    I would like to know the pros and cons of implementing without integrity constraints.
    It's a shame you seem to be so far into the process and committed to this vendor. I once had a vendor tell us his product would run on Oracle but they recommended MS SQL Server because "oracle can't handle more than 5 concurrent sessions." I made sure that vendor didn't make the short list.

  • Set restriction to get messages older than a specific date using EWS?

    Hi, I'm new to working with EWS. I have found some code to set a restriction that filters messages "between" two dates, but I need to be able to obtain emails that are "older" than a specific date so I can delete them. My application needs to be able to clean up a mailbox to keep only up to so many days/weeks worth of emails. The code I have that searches for emails between two dates is:
    public void SetDateRestriction(FindItemType findItemRequest)
    {
        // Search window: from 180 days ago (dateEnd) up to now (dateStart).
        DateTime dateStart = DateTime.Now;
        DateTime dateEnd = DateTime.Today.AddDays(-180);
        // DateTimeSent >= dateEnd (the older bound)
        PathToUnindexedFieldType dateSentPath = new PathToUnindexedFieldType();
        dateSentPath.FieldURI = UnindexedFieldURIType.itemDateTimeSent;
        IsGreaterThanOrEqualToType isGreaterThanOrEqual = new IsGreaterThanOrEqualToType();
        isGreaterThanOrEqual.Item = dateSentPath;
        FieldURIOrConstantType dateConstant = new FieldURIOrConstantType();
        ConstantValueType dateConstantValue = new ConstantValueType();
        dateConstantValue.Value = dateEnd.ToUniversalTime().ToString("yyyy-MM-ddT00:00:00Z");
        dateConstant.Item = dateConstantValue;
        isGreaterThanOrEqual.FieldURIOrConstant = dateConstant;
        // DateTimeSent <= dateStart (the newer bound)
        PathToUnindexedFieldType dateSentPath1 = new PathToUnindexedFieldType();
        dateSentPath1.FieldURI = UnindexedFieldURIType.itemDateTimeSent;
        IsLessThanOrEqualToType lessThanOrEqualTo = new IsLessThanOrEqualToType();
        lessThanOrEqualTo.Item = dateSentPath1;
        FieldURIOrConstantType dateConstant1 = new FieldURIOrConstantType();
        ConstantValueType dateConstantValue1 = new ConstantValueType();
        dateConstantValue1.Value = dateStart.ToUniversalTime().ToString("yyyy-MM-ddT00:00:00Z");
        dateConstant1.Item = dateConstantValue1;
        lessThanOrEqualTo.FieldURIOrConstant = dateConstant1;
        // AND the two comparisons: dateEnd <= DateTimeSent <= dateStart
        RestrictionType restriction = new RestrictionType();
        AndType andType = new AndType();
        andType.Items = new SearchExpressionType[] { lessThanOrEqualTo, isGreaterThanOrEqual };
        restriction.Item = andType;
        findItemRequest.Restriction = restriction;
    }
    How can I modify this to give me emails older than a specific date? Also, any input on doing the deleting would be greatly appreciated!
    Best Regards,
    Nelson

    Thank you very much, Glen! I can't believe it was that easy. Works perfectly. I will work on the DeleteItemType. I was also wondering if there is a way to detect whether an email is an Out-of-Office (OOO) or Undeliverable message?
    My app goes through all the emails and I need to skip any that are OOO or Undeliverable messages. I've learned how to use the SearchFilter and I find it easier and more straightforward than setting the restrictions. I'm also able to use a SearchFilter to get emails older than the specified date.
    Here's my new code for reading unread emails using the SearchFilter and paging (for better efficiency).
    1 - Is there any way to tell the filter to get AllProperties without having to specify each one using ItemSchema or EmailMessageSchema?
    2 - What is the difference between those two (any advantages of one over the other)?
    3 - Is there any reason to even be using the ExchangeServiceBinding here anymore, since it seems the ExchangeService object is all I need? In my old code, I was using the ExchangeService object to do an AutoDiscover and then using that to set the URL in the ExchangeServiceBinding. What are the main differences/uses for these two objects? Sorry if that's a dumb question.
    private void GetUnReadEmails(string sMailboxName)
    {
        // Create the binding to Exchange
        // (legacy proxy binding; see question 3 above - the ExchangeService
        // object is doing the real work here)
        ExchangeServiceBinding esb = new ExchangeServiceBinding();
        esb.RequestServerVersionValue = new RequestServerVersion();
        ExchangeService service = GetBinding(sMailboxName);
        Console.WriteLine("Exchange version: " + service.RequestedServerVersion);
        esb.Url = service.Url.ToString();
        esb.Credentials = new NetworkCredential(Username, Password, Domain);
        Mailbox mb = new Mailbox(sMailboxName);
        FolderId fid1 = new FolderId(WellKnownFolderName.Inbox, mb);
        // Page through the folder 50 items at a time
        ItemView iv = new ItemView(50, 0);
        FindItemsResults<Item> findResults = null;
        PropertySet ivPropSet = new PropertySet(BasePropertySet.IdOnly);
        SearchFilter searchFilter = new SearchFilter.IsEqualTo(EmailMessageSchema.IsRead, false);
        iv.PropertySet = ivPropSet;
        do
        {
            findResults = service.FindItems(fid1, searchFilter, iv);
            string sText = "";
            if (findResults.Items.Count > 0)
            {
                // Load only the properties we actually need, with the body as plain text
                PropertySet itemPropSet = new PropertySet(BasePropertySet.IdOnly) { ItemSchema.Subject, ItemSchema.Body, ItemSchema.DateTimeReceived, ItemSchema.HasAttachments, ItemSchema.ItemClass };
                itemPropSet.RequestedBodyType = BodyType.Text;
                service.LoadPropertiesForItems(findResults.Items, itemPropSet);
                Console.WriteLine("Total items: " + findResults.TotalCount);
                foreach (Item item in findResults.Items)
                {
                    Console.WriteLine("ItemID: " + item.Id.UniqueId);
                    Console.WriteLine("Subject: " + item.Subject);
                    Console.WriteLine("Received date: " + item.DateTimeReceived);
                    Console.WriteLine("Body: " + item.Body);
                    Console.WriteLine("Has attachments: " + item.HasAttachments);
                    // Mark the email as read
                    EmailMessage emEmail = EmailMessage.Bind(service, item.Id);
                    emEmail.IsRead = true;
                    emEmail.Update(ConflictResolutionMode.AlwaysOverwrite);
                    sText += "Subject: " + item.Subject.Trim() + " ";
                    sText += "DateTimeReceived: " + item.DateTimeReceived.ToString() + " ";
                    sText += "ItemClass: " + item.ItemClass.Trim() + " ";
                    sText += "\r\n";
                }
            }
            else
            {
                sText = "No unread emails";
                Console.WriteLine("No unread emails");
            }
            txtItems.Text = sText;
            iv.Offset += findResults.Items.Count;
        } while (findResults.MoreAvailable == true);
    }
    Thanks again for your time.  I appreciate it.
    Nelson

  • Find the used space in datafile on any specific date

    Hi,
    We have a datafile created on 2-11-2009.
    I wish to find the used space in that datafile on 13-12-2009.
    From v$datafile and dba_data_files, the "BYTES" column would give me the current used space in the datafile.
    But how can I find the used space in the datafile on a specific past date?
    Thanks.

    Hello,
    It depends on your Oracle release. Starting with Oracle 10.1, you have the view dba_hist_tbspc_space_usage, which can give you a history of the space used by the tablespaces:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_3195.htm#I1023456
    You may have to pay for a specific license to be allowed to access this view, which belongs to AWR (Automatic Workload Repository).
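    A minimal sketch of such a query, assuming the AWR retention window still covers the date in question (TABLESPACE_USEDSIZE is expressed in database blocks, so it is multiplied by the tablespace block size to get bytes; note the view tracks usage per tablespace, not per individual datafile):
    SELECT u.rtime,
           t.name                                 AS tablespace_name,
           u.tablespace_usedsize * ts.block_size  AS used_bytes
    FROM   dba_hist_tbspc_space_usage u
           JOIN v$tablespace    t  ON t.ts# = u.tablespace_id
           JOIN dba_tablespaces ts ON ts.tablespace_name = t.name
    WHERE  u.rtime LIKE '12/13/2009%'  -- RTIME is stored as a character string
    ORDER  BY u.rtime;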
    Hope this helps.
    Best regards,
    Jean-Valentin
