SDO_AGGR_LRS_CONCAT Limitations

We are running into problems trying to concatenate geometries whose combined total exceeds 1,000 vertices. We are trying to use "simplify" to reduce the vertex count, but we are wondering whether this is a known limitation of the LRS concat function.
Any guesses?

Hi,
  I found the problem - yes, the function requires measure values in the geometry.
I used the spatial function SDO_LRS.CONVERT_TO_LRS_GEOM to insert the measure values, and then used the result set to concatenate the LRS segments.
Thank you,
L.
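
For reference, a minimal SQL sketch of that approach. The table and column names (road_segments, geom, route_id) are hypothetical, and the measure range and tolerance are placeholders to adapt to your data:

-- Step 1: populate measure values on the plain line geometries.
UPDATE road_segments s
   SET s.geom = SDO_LRS.CONVERT_TO_LRS_GEOM(s.geom, 0, 100);
COMMIT;

-- Step 2: concatenate the measured segments with the LRS aggregate.
SELECT SDO_AGGR_LRS_CONCAT(SDOAGGRTYPE(s.geom, 0.005))
  FROM road_segments s
 WHERE s.route_id = 1;

SDO_AGGR_LRS_CONCAT expects every input geometry to carry measures, which is why the conversion step has to come first.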

Similar Messages

  • Memory limitation on T61

    I have the following Lenovo notebook:
    Product: ThinkPad T61 8898-55G
    Operating system: All
    Original description: T7100(1.8GHz), 1GB RAM, 120GB 5400rpm HD, 14.1in 1024x768 LCD, Intel X3100, CDRW/DVDRW, Intel 802.11abg wireless, Bluetooth, Modem, 1Gb Ethernet, UltraNav, Secure chip, Fingerprint reader, 4c Li-Ion, WinVista Business 32
    I configured my notebook to dual-boot Windows XP Pro SP3 and Windows Vista Business SP1 (32-bit).
    I have 2x 512 MB PC2-5300 667 MHz DDR2 memory modules inside.
    I read on the Lenovo site :
     Memory Compatibility
    (**) Windows Vista supports up to 4GB maximum memory (32-bit versions of Windows Vista cannot support 4GB). Windows XP supports up to 3GB maximum memory
    I would like to upgrade my memory with
    2x 40Y7734 1 GB PC2-5300 667 MHz DDR2 
    or
    2x 40Y7735 2 GB PC2-5300 667 MHz DDR2
    I would like to know, if I choose 2x 2GB:
    Will Windows XP SP3 work just fine, only using 3GB of the 4GB installed memory?
    Will Windows Vista SP1 (32-bit) run stably with the 4GB (even though it is not supported)?
    The prices of 1GB and 2GB modules don't differ much nowadays,
    so I would prefer the 2x 2GB modules, but I would like to know whether they will run stably (even with the memory limitation) on XP or Vista,
    or whether it is better/safer to use 2x 1GB modules.
    Can anybody help, explain, or advise me?
    Electronic

    You're better off with 2x2GB; you may decide to run a 64-bit OS someday, and you will notice a difference between 2GB and 3GB even when running a 32-bit OS.
    Cheers,
    George
    In daily use: R60F, R500F, T61, T410
    Collecting dust: T60
    Enjoying retirement: A31p, T42p,
    Non-ThinkPads: Panasonic CF-31 & CF-52, HP 8760W

  • Report for Qty Contract and Value Contract with PO release exceeding limits

    Hi All,
    Is there a standard report in SAP that users can use to view quantity and value contracts that have exceeded their quantity (for quantity contracts) or value (for value contracts) limits?
    Thanks in advance!

    hi Duke,
    Thinking about it logically, there is no report for this. That may be because you enter the quantity or value limits in the contract document itself. So when you create a PO against the contract and the quantity or value exceeds the limit, the system automatically issues a message stating that the quantity or value has been exceeded.
    So there is no report for this.
    Hope it helps!
    Regards
    Priyanka.P

  • iPhone limited to 130 apps at a time! 6,200 apps available in App Store

    The iPhone is limited to nine app screens. I noticed this when I tried to move on to a tenth screen and it would not let me. Also, when you download an app after all nine screens are full, the app does not show up even though the App Store says it is installed; Apple needs to expand the number of screens allowed. You end up having to delete two apps and download one app to make the invisible app appear (you have to try this yourself to get a better idea of what I mean). So it looks like we are limited to 144 apps per iPhone; subtract the 14 apps that come by default on the screens (not including the 4 on the bar below) and you are left with 130 apps that can be downloaded and used at a time per iPhone. That stinks: there are over 6,200 apps available in the App Store as I type this, and Apple is limiting me to carrying only 130 at a time. There is something wrong with this, and I think something should be done!

    No, actually, what that means is that most people like myself have to install 30 apps just to get the iPhone to do half the things it should have done out of the box. Sure, some are wants, but most are "needs" for it to do the things my old Palm Treo 600 could do, and still there's no copy and paste and no video. On top of that they have a stupid 130-app limit. I'd love to hear why that is.

  • Writing to file limiting system performance

    Hello,
    I really could use some help with my VI in terms of writing data.  I’ve had a LOT of help optimizing my code and am trying to enhance the performance in terms of data acquisition.  However, it seems as though writing to a data file is really limiting the frequency I can sample at.  I’ve done some research and understand that writing data at every iteration of the while loop and the build array function slows things down.  How would I modify the code so that the array buffer would store maybe 5000 data points before writing to a file, then clearing the array?  That would keep the array size small, as well as reduce the number of times the program is performing the write to file function.  Is there a better way of doing this?  I’m open to any other ideas as well.    
    I am taking data from 14 channels and would like to sample at 1 kHz each. The task right now is created within Measurement and Automation Explorer, and the number of samples is at 100. I also use a buffer indicator, which will generally grow out of control no matter how much I modify the number of samples and the frequency. My tests can last upwards of 6 hours, so it needs to work that long without crashing.
    The code and its subVIs are attached. Hopefully it's all there.
    Thanks for your help,
    Alex
    Attachments:
    Test Program.zip ‏295 KB

    Lynn,
    Yeah, I'll have to keep an eye on the block diagram size in the future. It can get unwieldy.
    I tried incorporating the Recent History Buffer example into my code. I did have a few hang-ups, which are giving me some trouble. Mostly, how do I connect my waveform data to the buffer VI? Will I be able to connect all my channels to it? Also, I will ultimately have two write-to-file VIs. Can the buffer differentiate which file to write to?
    Thanks,
    Alex
    Attachments:
    Instrument Panel V1.1 (Labview 8.0).vi ‏159 KB

  • HT3705 Has anyone any ideas why a Pages v09 file exported from v5 bloats from around 212kb to 5.9mb? I had to export existing template files back from the latest version to v09 due to limited features in the latest version.

    Recently updated Pages because of the Mavericks update and quickly discovered how limited the new version is, so I exported the altered files back to Pages 09 format and re-opened them using the earlier Pages. While amending them to re-save as templates, I again noticed that a 212kb file has bloated to 5.9mb.
    Has anyone any ideas or experience? It will not take long to fill up a 1TB drive at this rate! I had left behind PCs and MS Office and was reasonably happy using Pages for business, but it looks like Office for Mac is now going to be needed.

    Yes, I opened them all with 5 then re-saved as v5 templates. Then I realised other problems with v5, so I exported them all back to v09 as xxx.pages files. I used some with v09 and noticed they had all increased.
    I just checked again and noticed that the initial v5-saved templates had typically increased from 412kb to 1.1MB (they do have 4 graphics); after exporting, the templates are 5.9MB in v09 and the files are still 412kb.
    I had a similar problem many years ago with RTF files in Word: they kept bloating every time you edited and saved, due to saved or linked graphics?

  • Session variable size limitation (LV Webservices)

    Hi community,
    I read a couple dozen email addresses from an XML file and am trying to write them into a session variable. The email addresses are comma-separated and have a total string length of about 1100 characters. When I try to write them into a session variable, LabVIEW throws an error (-67158).
    It is very clearly related to the size of the string: if I use, say, only 200 characters I don't receive the error message.
    How can I get rid of this limitation?
    Thanks!

    I am writing a general-purpose webpage where I need email notifications. I have a workaround ready (before I send out the emails I don't read them from the session, but use the userID stored in the session to read the email addresses from the XML). But generally, having this limitation is annoying and unnecessary, as you can normally store 100 kB in one session easily (probably even more, but that is the most I have ever tried).

  • Maintain tolerance limits in the Tolerance key

    Hi,
    Could anyone tell me how to maintain these tolerance limits in the tolerance key? This relates to an error I receive while creating a PO.
    Best Regards,
    Sridhar.k

    What is the error message number you are getting?
    Solution is as Follows
    Set Tolerance Limits for Price Variance
    In this step, you define the tolerance limits for price variances.
    When processing a purchase order, the system checks whether the effective price of a PO item shows variances compared with the valuation price stored in the material master record. In addition, it checks whether the specified cash discount value is admissible.
    Variances are allowed within the framework of tolerance limits. If a variance exceeds a tolerance limit, the system issues a warning or error message.
    In the SAP System, the types of variance are represented by the tolerance keys. For each tolerance key, you can define percentage and value-dependent upper and lower limits per company code.
    Standard settings
    The standard SAP System supplied contains the following tolerance keys:
    PE Price variance, Purchasing
    Tolerance limit for system message no. 207. This message appears if the specified effective price exceeds the predefined tolerances when compared with the material price.
    SE Maximum cash discount deduction, Purchasing
    Tolerance limit for system message no. 231. This is a warning message, which appears when the specified cash discount percentage exceeds the predefined tolerances.
    Note
    You can specify whether the system message appears as a warning or error message using the menu options Environment -> Define Attributes of System Messages.
    Activities
    Maintain the tolerance limits for each tolerance key per company code
    Regards
    Biswajit

  • I have a new Windows tablet computer with limited hard drive space, and cannot transfer my iTunes library to the hard drive. Can I run the media from an external hard drive? If so, how do I transfer my files?

    I have a new Windows tablet computer with limited hard drive space (32 GB), and cannot transfer my iTunes library to the hard drive. Can I run iTunes from an external hard drive? I have tried to follow some of the directions on this site, but am having no success. Thanks.

    iTunes will run fine with the media on an external drive.
    However, I suggest that a tablet computer with a tiny hard drive is not ideal as the primary computer for managing an iTunes library, even a small one. If you have another machine, perhaps a big, boxy, inexpensive old desktop with a decent amount of storage, that might be a better choice.
    http://support.apple.com/kb/HT1364

  • Final user's can not see the data due to limited authorization.

    We have created an InfoSet with three InfoObjects, 0Account, 0Costcenter and 0COMP_CODE. 0Costcenter has an attribute retail location, 0RT_LOCATIO.
    0RT_LOCATIO is an authorization-relevant object. We as consultants can execute the InfoSet properly, but final users with limited authorizations cannot see the data because of an authorization failure.
    We have several options to solve the issue: deselect the authorization flag in the InfoObject, delete the InfoObject from the attributes of the cost center, or create an authorization object and assign it to the final users' profiles. But we don't want to go that way.
    My question is: is there any way to avoid including this attribute in the InfoSet definition? We are not using it in the query and we don't need it, so if we could delete it from the InfoSet (the same way you add or delete InfoObjects from an InfoCube) without changing the cost center master data, our problem would be solved.
    Does anyone know how to do this (if possible)?
    Thanks in advance!

    Just do two things to find the authorization check that failed for that user:
    1. Execute SU53 and find the failed authorization check. If you find it, please send it to the BASIS team.
    2. Alternatively, switch on the authorization trace in ST01 and ask that user to view the data. Once the user fails with the authorization issue, switch off the trace in ST01 and find the issue.
    If this is not successful, you can try another alternative.
    Hope this helps.

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

    Product : ORACLE SERVER
    Date written : 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also looks at some other file-related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPU's and system call interfaces (API's) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
    negative). In hexadecimal the largest positive number that can be
    represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
    Fewer files to manage for smaller databases.
    Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
    As handling of files above 2Gb may need patches, special configuration
    etc., there is an increased risk involved compared with smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE.
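    To see that bound on a given database, here is a hedged sketch
    (v$parameter stores the value as a string, hence the TO_NUMBER):
    SELECT 4194302 * TO_NUMBER(value) AS max_datafile_bytes
      FROM v$parameter
     WHERE name = 'db_block_size';
    Eg: with an 8K block size this works out to roughly 32Gb per datafile.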
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
    a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
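    For example, a sketch (hypothetical file name) capping growth safely
    below the 2Gb mark:
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf'
      AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;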
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
    This issue can be worked around by creating the tablespace prior to
    importing and specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
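    A sketch of that workaround (hypothetical tablespace and file names;
    the size in 'M' avoids the byte-count limit):
    CREATE TABLESPACE big_data
      DATAFILE '/u02/oradata/PROD/big_data01.dbf' SIZE 2500M;
    The import is then typically run with IGNORE=Y so that the pre-created
    tablespace is reused.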
    Export to Tape
    ~~~~~~~~~~~~~~
    The VOLSIZE parameter for export is limited to values less than 4Gb.
    On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
    The examples in <Note:30528.1> can be modified for use with SQL*Loader
    with large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users the UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb. (See the sketch after this list.)
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
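    Hedged sketches of the two workarounds flagged above, with hypothetical
    object and file names:
    -- Specify datafile sizes in 'M' or 'K' rather than in bytes:
    ALTER TABLESPACE users
      ADD DATAFILE '/u03/oradata/PROD/users02.dbf' SIZE 4000M;
    -- Pre-7.3.4 quota workaround: grant unlimited quota instead of
    -- a figure above 2Gb:
    GRANT UNLIMITED TABLESPACE TO scott;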
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
    import java.io.File;
    public class fileCheckUtl {
        // Return 1 if the named file exists, 0 otherwise.
        public static int fileExists(String FileName) {
            File x = new File(FileName);
            if (x.exists())
                return 1;
            else
                return 0;
        }
        // Simple command-line test harness.
        public static void main(String args[]) {
            int i = fileCheckUtl.fileExists(args[0]);
            System.out.println(i);
        }
    }
    Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 - Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1
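    Note: depending on the release and the schema's privileges, the Java call
    may also need file system permission before it can see files. A hedged
    sketch, run as a suitably privileged user (SCOTT is a placeholder):
    BEGIN
      DBMS_JAVA.GRANT_PERMISSION('SCOTT', 'SYS:java.io.FilePermission',
                                 '<<ALL FILES>>', 'read');
    END;
    /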

  • Public Folder Migration from Exchange 2007 to 2013 fails RelinquishedWlmStall budget limitations

    I have been trying to migrate approx 50 public folders from Exchange 2007 to Exchange 2013 for several months.  I have followed every link I can find on how to do the migration properly.  It will stay in a queued state for hours until the MRS is
    restarted.  It will then start copying messages.  After reaching 28% I get RelinquishedWlmStall as Status Detail.  Running the Get-PublicFolderMigrationRequestStatistics -IncludeReport | fl command I get the following: (truncated)
    Status                           : InProgress
    FailureTimestamp                 :
    IsValid                          : True
    ValidationMessage                :
    OrganizationId                   :
    RequestGuid                      : ff4af794-f31e-4334-959f-7d6c4ccbe310
    RequestQueue                     : PublicFolders
    ExchangeGuid                     : b12c1f83-c4ae-46bb-a26a-0422d90800e4
    Identity                         : eddf983d-5827-4a2a-98ef-820321ebebd0\ff4af794-f31e-4334-959f-7d6c4ccbe310
    DiagnosticInfo                   :
    Report                           
                                       4/23/2015 8:49:37 AM [pionexxxx] Request processing started.
                                       4/23/2015 8:49:37 AM [pionexxxxx] Cleared sync state for request
                                       00000000-0000-0000-0000-000000000000 due to 'CleanupOrphanedMailbox'.
                                       4/23/2015 8:49:38 AM [pionexxxxxx] Stage: CreatingFolderHierarchy. Percent complete:
                                       10.
                                       4/23/2015 8:49:38 AM [pionexxxxxx] Initializing folder hierarchy from mailbox '': 50
                                       folders total.
                                       4/23/2015 8:49:38 AM [pionexxxx] Folder creation progress: 0 folders created in
                                       mailbox 'b12c1f83-c4ae-46bb-a26a-0422d90800e4'.
                                       4/23/2015 8:49:38 AM [pionexxxxx] Warning: Failed to find or link recipient object
                                       'FF-D5-EF-1A-0A-95-D7-4B-89-02-A6-9E-DA-4D-23-4B' in active directory for mail
                                       enabled public folder 'Public Root/IPM_SUBTREE/AccountingFaxes' with EntryId '00-00
                                       00-00-1A-44-73-90-AA-66-11-CD-9B-C8-00-AA-00-2F-C4-5A-03-00-3B-C3-C1-30-22-E0-BE-4E
                                       98-03-B9-14-4F-B4-12-C1-00-00-00-0F-C3-4C-00-00'. This folder can be linked
                                       manually by running Enable-MailPublicFolder cmdlet after the migration completes.
                                       4/23/2015 8:49:39 AM [pionexxxx] Warning: Failed to find or link recipient object
                                       'D8-46-09-BE-0B-3D-6E-4A-95-10-8F-C5-AA-0B-88-50' in active directory for mail
                                       enabled public folder 'Public Root/IPM_SUBTREE/AccountingFaxes/CashRec/Inventory'
                                       with EntryId '00-00-00-00-1A-44-73-90-AA-66-11-CD-9B-C8-00-AA-00-2F-C4-5A-03-00-3B-
                                       3-C1-30-22-E0-BE-4E-98-03-B9-14-4F-B4-12-C1-00-00-00-12-6A-F5-00-00'. This folder
                                       can be linked manually by running Enable-MailPublicFolder cmdlet after the
                                       migration completes.
                                       4/23/2015 8:49:39 AM [pionexxxx] Warning: Failed to find or link recipient object
                                       'A6-12-DA-05-7D-21-DE-44-90-61-64-23-8C-62-41-F4' in active directory for mail
                                       enabled public folder 'Public Root/IPM_SUBTREE/AccountingFaxes/CashRec/Ulysses'
                                       with EntryId '00-00-00-00-1A-44-73-90-AA-66-11-CD-9B-C8-00-AA-00-2F-C4-5A-03-00-3B-
                                       3-C1-30-22-E0-BE-4E-98-03-B9-14-4F-B4-12-C1-00-00-00-0F-C3-7C-00-00'. This folder
                                       can be linked manually by running Enable-MailPublicFolder cmdlet after the
                                       migration completes.
                                       4/23/2015 8:49:40 AM [pionexxxx] Warning: Failed to find or link recipient object
                                       '96-76-90-F1-A5-B7-DB-41-A7-A3-D6-8D-86-AE-45-A5' in active directory for mail
                                       enabled public folder 'Public Root/IPM_SUBTREE/CSR Schedule' with EntryId '00-00-00
                                       00-1A-44-73-90-AA-66-11-CD-9B-C8-00-AA-00-2F-C4-5A-03-00-3B-C3-C1-30-22-E0-BE-4E-98
                                       03-B9-14-4F-B4-12-C1-00-00-00-0E-34-FF-00-00'. This folder can be linked manually
                                       by running Enable-MailPublicFolder cmdlet after the migration completes.
                                       4/23/2015 8:49:40 AM [pionexxxx] Warning: Failed to find or link recipient object
                                       '82-72-47-AA-D7-68-85-4C-91-92-41-45-8A-20-EA-6D' in active directory for mail
                                       enabled public folder 'Public Root/IPM_SUBTREE/DC 24 Hr Schedule' with EntryId '00-
                                       0-00-00-1A-44-73-90-AA-66-11-CD-9B-C8-00-AA-00-2F-C4-5A-03-00-85-62-89-2A-52-C3-92-
                                       5-93-F8-5A-6F-56-5D-D1-D4-00-00-00-00-27-37-00-00'. This folder can be linked
                                       manually by running Enable-MailPublicFolder cmdlet after the migration completes.
                                       4/23/2015 8:49:41 AM [pionexxxx] Folder hierarchy initialized for mailbox
                                       'b12c1f83-c4ae-46bb-a26a-0422d90800e4': 46 folders created.
                                       4/23/2015 8:49:41 AM [pionexxxx] Stage: CreatingInitialSyncCheckpoint. Percent
                                       complete: 15.
                                       4/23/2015 8:49:41 AM [pionexxxx] Initial sync checkpoint progress: 0/50 folders
                                       processed. Currently processing mailbox 'b12c1f83-c4ae-46bb-a26a-0422d90800e4'.
                                       4/23/2015 8:49:42 AM [pionexxxx] Initial sync checkpoint completed: 46 folders
                                       processed.
                                       4/23/2015 8:49:42 AM [pionexxxx] Stage: LoadingMessages. Percent complete: 20.
                                       4/23/2015 8:49:43 AM [pionexxxx] Messages have been enumerated successfully. 40607
                                       items loaded. Total size: 149 MB (156,253,810 bytes).
                                       4/23/2015 8:49:43 AM [pionexxxx] Stage: CopyingMessages. Percent complete: 25.
                                       4/23/2015 8:49:43 AM [pionexxxx] Copy progress: 0/40607 messages, 0 B (0
                                       bytes)/149 MB (156,253,810 bytes), 11/50 folders completed.
                                       4/23/2015 9:04:37 AM [pionexxxx] Stage: CopyingMessages. Percent complete: 28.
                                       4/23/2015 9:04:37 AM [pionexxxxx] Copy progress: 3660/40607 messages, 7.015 MB
                                       (7,355,409 bytes)/149 MB (156,253,810 bytes), 11/50 folders completed.
                                       4/23/2015 9:04:37 AM [pionexxxx] Relinquishing job because of large delays due to
                                       unfavorable server health or budget limitations.
    ItemsTransferred                 : 3660
    PercentComplete                  : 25
    4/23/2015 9:04:37 AM [pionexxxx] Relinquishing job because of large delays due to
    unfavorable server health or budget limitations.
    It never tries to restart.  Can anyone please help?  The server health checks all come back OK.  I will be more than glad to provide further information.

    Hi,
    Based on the error message, I notice that the public folder migration is slow and displays "Relinquishing job because of large delays due to unfavorable server health or budget limitations".
    If I have misunderstood your concern, please do not hesitate to let me know.
    Since it is just slow and not failing, please do not stop the batch; it might connect to another server for sync.
    Meanwhile, please refer to the link below for details about Exchange Online migration performance and best practices:
    https://technet.microsoft.com/library/dn592150(v=exchg.150).aspx
    If the issue persists, please collect the relevant migration report for further troubleshooting.
    Thanks
    Allen Wang
    TechNet Community Support

  • I am a long-term user of Lightroom as a standalone product with a perpetual licence. As a retired person on limited income, it is very disappointing to me that Adobe have introduced the 'Creative Cloud' (CC) subscription service in order for me to be able

    I am a long-term user of Lightroom as a standalone product with a perpetual licence. As a retired person on a limited income, it is very disappointing to me that Adobe have introduced the 'Creative Cloud' (CC) subscription service as the only way for me to continue upgrading this excellent product. At a minimum cost of £9 per month it will be too expensive for me. The additional services that CC brings are personally of no relevance or usefulness. Adobe should be prepared to support existing users who are, like myself, non-commercial, amateur photographers by giving them the simple opportunity to upgrade to Lightroom 6 as a standalone, perpetual-licence product. As a member of a camera club I know my fellow members who use Lightroom are equally disappointed by this move to a subscription-only service.

    john beardsworth wrote:
    John Waller wrote:
    However, Adobe will soon introduce Cloud only features into Lightroom CC for which LR6 (perpetual license) owners will have to wait until LR7 (paid upgrade).
    That is possible, John, but it is only speculation on your part. Might, not will.
    kwdaves wrote:
    There is a "Lightroom 6" upgrade available for US $79 if you have a valid license for any of the earlier versions. From what I can tell, the only difference between Lightroom 6 Full, Lightroom 6 Upgrade and LightroomCC is in the license. The download file is the same.
    Other differences - with CC you get LrMobile/LrWeb and they throw in a free copy of Photoshop too.
    Yes, but when I bought my standalone license and clicked on the "Download" button it took me to the LightroomCC page. The downloaded file is named Lightroom 6, but in the CC app the installed program is LightroomCC (2015).

  • Turbo K7 Limited & XP 1800+ issue

    All,
     I have the K7T Turbo Limited Edition board with the BIOS upgraded to the latest version (3.6) and a 1.4GHz AMD Athlon. I would like to upgrade the CPU to an XP 1800+, but the system will not boot; it hangs before POST. Would some kind soul give me a pointer on how to overcome this problem...
    Thanks in advance...  

    Hello,
     Not sure if I understand your advice. I have searched the forum and found quite a few people claiming to run the K7T LE with an XP 1800+ successfully, but no info on the setup.
    What I did was :
     Boot the system with  the old CPU 1.4GHz Athlon
     Get to the BIOS setting ( my bios is the latest ver 3.6)
     Set the CPU Vcore voltage to 1.75, CPU Vio to 3.3 and the CPU clock ratio to 11.5 as suggested in the CPU supported page.
     FSB is already set to 133
     Save the changes.
     Shutdown
     Physically change the CPU to XP 1800+
     Boot the system
     Nothing happened.
     The 4 LEDs stay red.
    I also did the same steps above but left the CPU core voltage
    and clock ratio at Default. Same result...
    Are there any other hints?
     

  • K7T Turbo-R Limited Edition W/Raid

    Two questions,
    First, how do you tell if your XP 1800+ is a Palomino? I purchased one, but all it says is AMD Athlon XP 1800+ on the factory sticker.
    Second, when I use this processor with my K7T Turbo-R Limited w/RAID it boots up fine and displays the correct processor speed (shows XP 1800+ in POST), but the PC will just reboot at random times out of the blue. I have an AMD-approved 400 watt PSU and my memory is good. I've also done all the BIOS updates. Is it possible that this is an unsupported CPU?

    http://www.amdboard.com/amdid.html
    http://www.overclockers.com/tips00173/
