Performance degradation in Data Integrator Designer with Multi-user mode

Post Author: ccastrillon
CA Forum: Data Integration
We are developing an information system based on a DataMart populated using ETL processes built with Data Integrator. There are three of us developing, and when we work at the same time we start to have performance problems with Designer. Designer sometimes seems to freeze and the development process becomes painful.
Where is the problem? In accessing the repository? Is there any known bug?
The Job Server? But it happens even when we don't launch any job, only when building ETL processes and manipulating objects (dataflows, workflows, etc.).
We would appreciate any help. Thanks in advance.
Carlos Castrilló

Post Author: bhofmans
CA Forum: Data Integration
What do you mean by 'working at the same time'? You need three different repositories if you want to work with three developers, so there should be no impact at all when working simultaneously...
-Ben.

Similar Messages

  • Running Designer in multi-user mode on Citrix

    We have multiple developers accessing DS 3.1 Designer via Citrix. Each time a developer logs in and launches Designer, they see the previous developer's DB login information. We have since implemented the RunDSDesignerMU.bat script, but we're still having the same issue. Any ideas?

    I think this is not possible, as the debugger attaches to the Java process over a specific port specified during WLS startup. Now if a debugger session is already active (read: using the port), how can others attach to the same port?
    I'm wondering why you are encountering this limitation. Ideally, debugging is done on development environments where each developer has a dedicated instance to play with. Even in the event you go for a single installation, you could create multiple server instances, one for each developer (see the sketch at the end of this message).
    Thoughts?
    anand
    In article <3f0c3b90$[email protected]>, Dan wrote:
    >
    The problem we are facing is that starting WebLogic in debug mode only allows a single user to debug at a time. It does not allow multiple users to debug remotely. Also, when a user is debugging, the WebLogic instance cannot be used by other users - even for normal execution.
    Supporting Information:
    1. We have installed BEA WebLogic 7.0 on a Dual Xeon Processor machine with 1GB
    RAM and deployed our J2EE application on this server.
    2. The development is being done using JBuilder 8.0 Enterprise edition. All team
    members have JBuilder 8.0 installed on their local workstations.
    3. When we need to debug the application, we have to start the WebLogic server
    using "StartRemoteWebLogic". This allows us to remotely debug the application
    from our local workstations.
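    As a rough illustration of anand's suggestion: each developer's server instance can listen on its own JPDA debug port, so several people can debug in parallel. A minimal sketch of the standard JVM debug options for one such instance (the script name, server name, admin URL, and port are hypothetical):
    REM startDebugDevServer1.cmd -- one copy per developer instance,
    REM each with a unique debug port (5005, 5006, ... are placeholders)
    set JAVA_OPTIONS=%JAVA_OPTIONS% -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
    call startManagedWebLogic.cmd devServer1 http://adminhost:7001
    Each developer then points their JBuilder remote-debug configuration at the port of their own instance.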

  • How does Essbase deal with multi-users

    How does Essbase deal with multiple users connecting for reading and writing at the same time? How is the global coherence of the database controlled when users are updating dimensions and data all together?

    For reading, Essbase handles multiple requests concurrently. For metadata (the outline), only one user at a time can make and save changes. For data, Essbase uses a block-level locking mechanism, which implies that an update to a block can only be done by a single update request, but other blocks in the database are still updateable by other users. Hope that helps.
    Regards,
    Jade
    ---------------------------------
    Jade Cole
    Senior Business Intelligence Consultant
    Clarity [email protected]

  • Subscription cannot be used with multi-user device...

    I just received an email message stating "Subscription cannot be used with multi-user devices" and "Here's a quick reminder that Skype's subscriptions are for individual use only and cannot be used with multi-user devices such as PBXs"
    I'm not sure why I received it; I am the only user, and I only call out via my iPhone on wifi and, occasionally, my PC.
    Could anyone please explain this to me?
    Thanks - Steve

    Hi Steve,
    Basically, as long as you don't make subscription calls from more than one device at the same time, there shouldn't be an issue.
    I'll look into this messaging as it seems to be a bit misleading.
    Could you check your email? I sent you an email asking for a bit more information about this notification.
    Andre
    If the answer was helpful please mark it with Kudos, and if the issue is resolved mark it as the solution. This will help other users find this answer more easily. Thanks in advance!

  • How to start Integration server with the user XISUPER

    Hi,
    How to start Integration server with the user XISUPER?
    Regards,

    Mahesh,
    In the post installation document, under Creating User XISUPER,
    I see the following,
    1.You must now log on to the Integration server host with the user XISUPER, to switch the initial password to a valid password.
    2.You must restart the J2EE engine to transfer the user creation to the J2EE immediately.
    I am unable to proceed here. Can you please help me to resume the post-installation steps?
    Thanks,

  • 4.3 Multi User Mode; Where?

    According to Samsung's official site, the 4.3 upgrade talks about us having "Multi User Mode", which means you can pass your smartphone to someone else and they can log in and have access to all their stuff (emails, I'm thinking).
    But I can't seem to find it on my cell phone. How about you?
    So then a Samsung agent said it wasn't currently available on my device and gave me a list of what did change.
    It said "Updated Interface: Home/Apps/Lock/Notification Panel". So I asked: what changed on the lock screen... perhaps the multi-user ability?
    She said "No, a signature lock screen". I said "Hmmm". Yeah, you have the ability to use your signature to lock/unlock your device.
    I said... "I didn't get that update".
    So I'm here asking the rest of you: how do I access the multi-user function, where is it, and did anyone else get a "signature" option for the lock screen?
    Thanks

    Sorry it took so long to respond; I've been busy.
    Okay, well, what I viewed was off of Samsung's website, so I know Verizon has nothing to do with it; it just kinda sucks that it's advertised and I'm misinformed by the maker of my phone, LOL.
    Provided Verizon will let me post a link to a third-party site, here's the link I got the info from:
    http://m.samsung.com/us/article/premium-suite-update-for-galaxy-devices?marsLinkCategory=mphone:mphone&MKM_RID=0130085086&MKM_MID=4289732&CID=eml-mb-cp-1013-499
    Note at the very top of that page (the link I provided) it says this: Our latest software update opens up a new world for your Galaxy S® 4, Galaxy Note® II and Galaxy S® III with Android 4.3, Galaxy Gear compatibility and an emphasis on security and privacy (no mention of the word tablet, unless one of those words is implied? Myself, I own a "Galaxy Note 10.1" tablet).

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You could run it in tempdb or any other database; which one is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (these are obviously based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run the script from #1 to #7. This will create the two test tables, populate them with records (40 mln. and 10 mln.), and build regular clustered indexes.
    2. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, elapsed time = 1389 ms.
    3. Run the script from #8 to #9. This will replace the regular clustered indexes with clustered columnstore indexes.
    4. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, elapsed time = 392 ms.
    As you can see, the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
    5. Run the script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run the test query (at the top of the script). Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, elapsed time = 3119 ms.
    And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each of the operators here. They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT operation, you ended up with all the original row groups of the index having almost all of their data marked as deleted, plus almost the same number of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer
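    A minimal sketch of how to confirm and clear the deleted rows Ivan describes, using the table and index names from the script above (sys.column_store_row_groups is available from SQL Server 2014 onwards):
    -- Inspect row group state: after the UPDATE, deleted_rows should be close
    -- to total_rows for the original row groups
    SELECT OBJECT_NAME(object_id) AS TableName,
        partition_number,
        row_group_id,
        state_description,
        total_rows,
        deleted_rows
    FROM sys.column_store_row_groups
    WHERE object_id IN (OBJECT_ID('Main'), OBJECT_ID('Txns'))
    ORDER BY TableName, partition_number, row_group_id;
    -- Rebuild to physically remove the deleted rows (this is the rebuild that
    -- produced the fast numbers in the reply above)
    ALTER INDEX CDX_Main ON Main REBUILD;
    ALTER INDEX PK_Txns ON Txns REBUILD;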

  • Data Integrator Designer login issue

    Post Author: jeffrey
    CA Forum: Data Integration
    What causes this Data Integrator Designer login problem? "LOGON EXCEPTION BODI-1112170: Cannot connect to repository. Please correct and retry."
    What is the possible solution for this error?
    Thank you.

    Post Author: bhofmans
    CA Forum: Data Integration
    There seems to be something wrong with the connection to your repository.
    To test the connectivity you can use the Repository Manager (installed on the same machine as your DI Designer), fill in the repository connection parameters and click 'Get Version'. This will check the connectivity and show you the version of your repository.
    Possible causes for this error:
    Database middleware not installed on the machine (which database type are you connecting to?)
    Database middleware not configured to connect to your repo (e.g. for Oracle, the instance must be defined in tnsnames.ora)
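    For the Oracle case, a sketch of what such a tnsnames.ora entry looks like (the alias, host, and service name here are hypothetical; use your repository database's values):
    DIREPO =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = direpo.example.com))
      )
    You can then check that the alias resolves with 'tnsping DIREPO' before retrying 'Get Version' in Repository Manager.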

  • Link to download Data Integrator Designer 11.7

    Can someone tell me how I go about downloading Data Integrator 11.7 Designer? I have the Job Server download but need to update Designer according to the Job Server instructions.
    Thanks.

    Hi Ian,
    DI 11.7.x does not support Vista, nor does it support 64-bit Windows.  A forthcoming major release of Data Services 12.x is tentatively scheduled to support these sometime in 2009...I apologize that I cannot be more specific, as we do not as a company comment on future features or release dates.
    Many thanks,
    ~Scott

  • Data Uploads done with one User Id not visible to other users in SPM

    Hi,
    Data uploads were successfully carried out by one of the SPM users. However, other users (with different user id) are not able to see anything in the Data Upload Summary screen.
    Is there a restriction on the visibility of Data Upload Summary for data uploads carried out with one User id to other users in SPM? A similar behaviour is observed for other screens within the Data upload workbench.
    In case this is not the expected behaviour, it would be great if you could please provide pointers to possible reasons for this.
    Just for your information, all users have been granted the same privileges in the SPM application.
    Thanks in advance.
    Regards,
    Ashish Sharma

    Hi Ashish,
    No, this is not the expected behavior. We have seen this issue for other customers in the past, but the reason has always turned out to be role-related.
    Can you ensure that the required SPM roles are assigned to the users who do not see the DM data, both in ABAP as well as in the portal?
    Thanks,
    Divyesh

  • Video display errors with multi-user

    MacBook Pro or Mavericks bug
    10.9.3, MacBook Pro (Late 2010), MacBookPro6,2 
    When using multi-user switching, the background image is distorted.
    To recreate:
    Establish user "B" with a background image on screen.  I have three Spaces with three different images.  All users logged out.
    Log in as user "A".
    Log in to user "B" by using the menu bar to select the other user, without logging out of user "A".
    Note the bar on the right-hand side of the screen and the distorted colors.
    User A has one of the Apple background photographs
    User B has personal photographs from 8 MP camera
    If user B is logged in from the login window, then all is OK. The display issue only shows up when logging in from the other user via fast user switching (or whatever that is called). If user B logs out and logs in again from the login window, all is OK.

    Can you post screenshots of the distortion? Someone may recognize it.
    What you describe sounds like one user has something running that doesn't work well with FUS (I think it's still called 'fast user switching').
    You can disable all login items by holding alt as you click the login button. See if that has any effect. If it does, you know you need to look at that user's login items (in System Preferences > Users & Groups). I'm not sure if the 'alt trick' works inside the fast user switching menu, so you may want to verify the flaw exists when going to the login window, then reboot and try the 'alt trick' with the same steps.
    The other thing to try is using a background image that has been resaved via Preview's export (or another app). It's possible the file itself is the reason for the issue; do the standard backgrounds do the same?
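    If you'd rather script the resave than use Preview, sips (built into OS X) can re-encode the image; the file names here are hypothetical:
    $ sips -s format jpeg ~/Pictures/original.png --out ~/Pictures/resaved.jpg
    Then point the affected user's desktop background at the resaved copy and test fast user switching again.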

  • Applications with multi-users?

    Are there downloadable example applications which use multiple users managed by session beans?

    Thanks for your answer.
    Another question...
    I have problems with my multi-user application (as you saw). In fact, sessions get mixed up between users.
    1) Do you know if there is any configuration for a session bean to accept many users?
    2) Do you know if it's possible that Tomcat or the Sun application server mixes up the sessions?
    Thanks

  • Z4: Word and OneDrive don't work with Multi-User

    Hi,
    We are sharing the Z4 tablet among four people in my family. For this I'm using Android's multiple-users feature. The pre-installed MS Word worked perfectly for the main user (the owner). Then I tried to install it for the second, third, and fourth users. In all cases, after starting Word for these users, the message came up "download of additional content required". When clicking on "download", the app crashes. Then recently an update of Word was delivered via Google Play. Now the same happens for the main user. Uninstalling for all four users and reinstalling from Google Play didn't solve the issue.
    For MS OneDrive the issue is: it works for the main user. For the other users, the Play Store gives the error message: "another user on this device installed an incompatible version of OneDrive. Uninstall this first". Uninstalling and reinstalling didn't help that issue either. Only one of the four users can use OneDrive.
    Any ideas on how to solve this? I do have the BKB50, and Word was my main reason to go for that combination with the Z4.
    Regards,
    Zampano

    Microsoft Office for Android email support replied only with a standardized email. Besides pointing to some FAQ, it just offers to call the support hotline. I'll have to wait for their office hours then. In the meantime: has anyone tried using Word in a multi-user setup on the Z4? Maybe it's just an issue on my end...
    Thanks and best regards,
    Zampano

  • Data back up from Single User Mode

    Hello,
    I have a problem with my 700 MHz iBook, dual USB, with OS X 10.3.9.
    Unfortunately the OS doesn't start up because of an "overlapped extent allocation" problem...
    I would like to re-install the complete OS but before doing it, I would like to back up some directories.
    Is it possible to save some data from single-user mode, by connecting an iPod or some other external peripheral?
    If yes, how can I do it?
    Thank you in advance for your kind support.

    Before giving up the ghost, check out these:
    http://docs.info.apple.com/article.html?artnum=25770.
    Manually fix Overlapped Extent Allocation Errors without Disk Warrior
    Overlapped extent allocation errors can be the bane of any Mac user's existence. Often these errors go unnoticed until the problem becomes visible: your Mac might refuse to boot, crash unexpectedly, or worse, critical data might disappear from the Finder. Disk Utility can detect, but not fix, overlapped extent allocation errors, and certain third-party utilities, such as Alsoft DiskWarrior, can fix them, but generally without reporting the consequences.
    Overlapped extent allocation errors occur when the file system thinks that two files are occupying the same area on the hard disk, hence overlapping on the same "inode," which is the structure that holds the location of the data blocks the file occupies, as well as the file permissions and flags.
    Clearing the "overlapped" or "overallocated" extent allocation essentially means that you'll have to lose some data, because the only way to remove the overlap is to delete the file that's occupying the inode. So, if you suspect, or find out, that the guilty file is a critical system file that resides in one of the hidden system directories such as /etc, /var, or /usr, or in visible system directories such as /System or /Library, and you don't want to reinstall the whole OS (which might not fix the overlapped extent allocation anyway), it's good to have another disk available to copy the files back to your original disk if necessary: a second bootable hard drive or a FireWire drive connected to your Mac when you remove the misbehaving file. Just make sure that when you copy the file back to your boot disk the permissions are correct, so it's best to use the "ditto" command, so that all sticky bits, flags, and permissions are preserved.
    In case you didn't know, you don't have to boot from an install CD in order to check for overlapped extent allocations. All you need to do is restart your Mac, while holding down command + S to boot in "single-user mode."
    At the command prompt that appears, type:
    $ fsck -fy
    If you have an overlapped extent allocation, you'll see:
    "Overlapped Extent Allocation" (File 123456d)
    No matter how many times you run fsck -fy, you'll never be rid of the error.
    So, simply issue the following command:
    $ find / -inum 123456 -print
    Note the "d" was dropped, or any extra letter that appears after the inode number.
    The find will return a file name that matches with the inode number, and the path to that file. If you remove the file then the fsck will not return this error next time you run it.
    However, before you can delete the file(s) in single-user mode, you'll need to mount the file system. Type:
    $ mount -uw /
    When done, issue the "sync" command, and that will flush the write cache so that all pending writes are written from memory to the disk. Also, since most OS X 10.3 Macs use the HFS+ Journaled file system, it might be a good idea to disable the journal before booting into single-user mode by typing:
    $ sudo diskutil disableJournal /
    then re-enable it when done fixing the overlapped extents and rebooting normally:
    $ sudo diskutil enableJournal /
    Chris Anderson is a long-time Linux propellerhead who just got his first Mac, an iBook G4, and can't keep his hands off of it. He currently works as "The Architect" and general visionary for a maker of world-class collectibles.
    If you own Disk Warrior then it should be able to repair a drive with overlapped extents.
    There are two backup utilities included in Unix - psync and rsync. You will find them in the /usr/bin/ directory. For documentation simply enter: man psync or man rsync. In order to write data while in single-user mode you need to issue the command: /sbin/mount -uw / (note: there is a space between the "uw" and the "/"). To mount an external drive you will need to provide its mountpoint in place of the final "/", e.g., "/Volumes/volname" (without the quotes).
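    Putting those pieces together, a minimal sketch of a single-user-mode backup (the user and volume names are hypothetical, and this assumes the external drive is already mounted at /Volumes/Backup):
    $ /sbin/mount -uw /
    $ /usr/bin/rsync -av /Users/yourname /Volumes/Backup/
    $ sync
    The -a flag preserves permissions and timestamps, but note that the stock rsync on 10.3 does not preserve HFS+ resource forks, which is why psync is often suggested alongside it.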

  • Getting Data Off Harddrive in Single User Mode

    When my PowerBook G4 running 10.3.9 boots, after the white Apple screen I just get a blue screen and my mouse. I am able to boot into single-user mode and get to the data on my hard drive. My other computer at home is a Linux box, so if I could just get the network going in single-user mode I could ssh all of my stuff to the other machine. Is there any way to do this? Is there any other good way to recover my data?

    Hi dsignoff,
       This is likely to be more difficult than it sounds. If the problem isn't a failed hard drive, you might have an easier time simply doing an archive-and-install. Of course, any installation except a fresh install will preserve your home directory, but an "update" installation stands a reasonable chance of not fixing the problem.
       Even if you do succeed in moving everything to the Linux box, you will likely lose the resource forks and file metadata of the files.
       Of course, even if you do try an archive-and-install, it never hurts to have a backup. I'm a bit rusty with Panther, but I believe that it requires the following as a first step:
    /usr/libexec/register_mach_bootstrap_servers /etc/mach_init.d
    That actually starts quite a bit of the system, possibly including the part that is causing Aqua to fail to start up (meaning that it too could fail). However, if the above command succeeds, the next step is to execute:
    /sbin/SystemStarter start Network
    You may also need:
    /sbin/SystemStarter start NetworkExtensions
    If all of that works, you should have enough services available to ssh to the Linux box and move the files (see the sketch after this post). Of course, there is a chance that if you find the error messages of the failing process in /var/log/system.log, we could actually recommend a fix for the system.
    Gary
    ~~~~
       "The wages of sin are death; but after they're
       done taking out taxes, it's just a tired feeling."
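    If those services do come up, the copy itself can be as simple as scp over ssh (the hostname and paths here are hypothetical):
    $ scp -r /Users/yourname user@192.168.1.20:/home/user/mac-backup/
    As Gary notes above, this will lose resource forks and Mac-specific file metadata, so treat it as a salvage copy rather than a faithful backup.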
