Journal Header Is Corrupt

Hi,
PowerBook 12", 10.5+, 867 MHz, 640 MB RAM, with a replacement HDD. iCal has been quitting and not allowing editing; other apps have been acting up as well. I have repaired permissions, and in attempting to run a defrag I got the following error message: "Unable to open volume; HFS errors. hfs_journal.c:103 (from 0x5958c) (13) The Journal Header is corrupt (checksum 87ce7007, should be 2505bbbe)"
I tried researching this with no joy.
Any ideas would be great.
Thanks

Since it happens with another user, either reinstall iCal...
Custom installs in Mac OS X 10.4...
http://docs.info.apple.com/article.html?artnum=301229
"Here's a list of all the custom installation options available with Optional Installs.mpkg:
* Applications
o Address Book 4.0
o iCal 2.0
o iChat 3.0
o iTunes 4.7.1
o Mail 2.0
o Oxford Dictionaries
o Safari 2.0
o X11"
Or try an Archive & Install, which gives you a fresh OS but can preserve all your files, pics, music, settings, etc., as long as you have plenty of free disk space and no disk corruption. It's relatively quick and painless...
http://docs.info.apple.com/article.html?artnum=107120
Just be sure to select Preserve Users & Settings.
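For the curious, the checksum in that error message comes from a simple rolling checksum over the journal header (the calc_checksum() routine in xnu's vfs_journal.c). Here is a rough Python sketch of the algorithm, just to illustrate how the "checksum X, should be Y" comparison arises; the exact span of header bytes covered (with the stored checksum field zeroed out) is an assumption here:

```python
def hfs_journal_checksum(data: bytes) -> int:
    """Rolling checksum used for the HFS+ journal header,
    modelled on calc_checksum() in xnu's vfs_journal.c."""
    cksum = 0
    for byte in data:
        cksum = ((cksum << 8) ^ (cksum + byte)) & 0xFFFFFFFF
    return (~cksum) & 0xFFFFFFFF

# fsck_hfs recomputes this over the on-disk journal header (checksum
# field zeroed) and compares it to the stored value; a mismatch is
# what produces messages like "checksum 87ce7007, should be 2505bbbe".
```

A mismatch like the one above usually means the journal header block on disk was overwritten or only partially written, which is why the disk tools refuse to touch the volume until the journal is dealt with.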

Similar Messages

  • AP to GL Transfer: How to have a custom Journal Header name in GL.

    Hi,
    Does any one know how to have a custom Journal Header name while transferring the Journals from AP to GL?
    Assume that I am transferring the Journals which belong to a single Invoice from AP to GL. When these Journals are transferred from AP to GL, Oracle will create a Journal Header with the default naming convention. The Oracle-generated header name will be as shown below.
    *123456 Purchase Invoices USD*, where 123456 is the AE_HEADER_ID and USD is the Invoice currency code.
    Our requirement is that we want the Journal Header name to be something like *"Hello World"+TIMESTAMP*, where TIMESTAMP is the date when the user ran the AP to GL transfer program.
    I know that when building the AAD(Application Accounting Definition) we can define the Journal Header descriptions and Journal Lines descriptions. But, I did not see any setup option to have the custom Journal Header name in GL.
    Any suggestions on this issue are highly appreciated.
    Thanks in advance,
    Lokesh.

    Hi Lokesh,
    The best way to have a custom journal header name is to use a hook package.
    When data moves from XLA to GL, a new dynamic table is created each time, equivalent to the GL_INTERFACE table but with a different name (XLA_GLT_XXXXX).
    This table's name is stored in the 'gl_interface_control' table, but after use the record is deleted from 'gl_interface_control', so we can't see the table information after the Create Accounting program finishes.
    Any modification you make in this table is carried through to the GL base tables. So to set a custom journal header name, we need to change the Reference4 column of the dynamic table.
    Here is the code to get the table name; through a procedure you can then update the Reference4 column.
    FUNCTION pre_module_hook (run_id IN NUMBER,
                              errbuf IN OUT NOCOPY VARCHAR2)
      RETURN BOOLEAN
    IS
      chr_errbuf              VARCHAR2 (50);
      chr_retcode             VARCHAR2 (50);
      l_interface_table_name  VARCHAR2 (2000);
    BEGIN
      -- Find the dynamic interface table (XLA_GLT_xxxxx) for this run.
      SELECT interface_table_name
        INTO l_interface_table_name
        FROM gl_interface_control
       WHERE interface_run_id = run_id;

      -- Custom procedure that updates Reference4 in the dynamic table.
      xxdb_gl_icaccntgdist_pkg.gl_int_jrnl_update (chr_errbuf,
                                                   chr_retcode,
                                                   l_interface_table_name);
      RETURN TRUE;
    END pre_module_hook;
    Make these changes in the GL_IMPORT_HOOK_PKG.pre_module_hook function.
    You can make the change by calling a procedure that uses a dynamic query from within pre_module_hook.
    Thanks,
    Gaurav K
    Edited by: 972729 on Nov 22, 2012 4:46 AM
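To make the hook concrete: since the XLA_GLT_xxxxx table name is only known at runtime, the called procedure has to issue its UPDATE dynamically (EXECUTE IMMEDIATE in PL/SQL). Below is a hedged Python sketch of how such a statement might be assembled; the procedure named above is site-specific, the "Hello World"+TIMESTAMP naming follows Lokesh's requirement, and everything else here is an assumption:

```python
from datetime import date

def build_journal_name_update(interface_table: str,
                              prefix: str = "Hello World") -> str:
    """Build the dynamic UPDATE a hook procedure would execute (via
    EXECUTE IMMEDIATE) to stamp a custom journal header name into
    Reference4 of the run's dynamic XLA_GLT_xxxxx table."""
    # The table name is interpolated into the SQL text, so reject
    # anything that is not a plausible Oracle identifier.
    if not interface_table.replace("_", "").isalnum():
        raise ValueError(f"suspicious table name: {interface_table}")
    stamp = date.today().isoformat()
    return f"UPDATE {interface_table} SET reference4 = '{prefix} {stamp}'"
```

In the real hook, the resulting string would be executed before Journal Import reads Reference4 to build the header name.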

  • Is there any API or Program to Update GL Journal Header DFF Values

    Hi All,
    Can anyone let me know if we have any API or Program for updating GL Journal Header DFF Values.
    Any information will be appreciated..
    Thank You
    Gt1942

    DFFs (Attribute1-15) can be updated directly using custom programs. Oracle doesn't provide any APIs to update Attribute1-15.

  • How to Change Journal Header name to Custom Header Name while GL Importing?

    Hello Experts,
    I am in a situation where the customer wants to keep their own journal header naming conventions for journals imported into Oracle GL from custom journal sources.
    As I know, while importing journals Oracle creates the journal name based on the logic mentioned below.
    "Journal Import creates a default journal entry name using the following format:
    (Optional User-Entered REFERENCE4)(Category Name)(Currency)
    (Currency Conversion Type, if applicable)
    (Currency Conversion Rate, if applicable)
    (Currency Conversion Date, if applicable) (Encumbrance Type ID, if applicable)
    (Budget VersionID, if applicable). If you enter a journal entry name,
    Journal Import prepends the first 25 characters of your journal entry name to
    the above format"
    But then how is it possible to use only the journal header name present in REFERENCE4, excluding all the other strings appended by Oracle? Instead of the generated string the customer wants to keep their own parameters, e.g. REFERENCE4.A.B.C.
    Is it possible to solve this using seeded setup or modifying some hook packages or anything else?
    As far as I know, one workaround would be to update the journal header name after Journal Import completes successfully for the custom journal source. My only fear is that Oracle doesn't allow updating the base tables without an API. Am I right?
    So it would be really great if anyone of you can suggest the best solution or best possible workaround.
    Thanks

    Duplicate - How to Change Journal Header name to Custom Header Name while GL Importing?
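As a model of the naming rule quoted in the question (the 25-character REFERENCE4 prefix plus the generated string), here is an illustrative Python sketch; it mirrors only the documented behavior, and the parameter names are made up:

```python
def default_journal_name(reference4, category, currency,
                         conv_type=None, conv_rate=None, conv_date=None):
    """Illustrative sketch of the Journal Import naming rule: the
    first 25 characters of REFERENCE4 are prepended to the generated
    (Category)(Currency)(optional conversion fields) string."""
    parts = [category, currency]
    for optional in (conv_type, conv_rate, conv_date):
        if optional is not None:
            parts.append(str(optional))
    generated = " ".join(parts)
    if reference4:
        # REFERENCE4 is prepended, never used verbatim on its own.
        return f"{reference4[:25]} {generated}"
    return generated
```

This also shows why a pure seeded setup cannot give "REFERENCE4 only": the generated suffix is always appended, which is what pushes people toward hook packages or post-import updates.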

  • Difference between a Journal header and a journal line

    Hi,
    Can anyone please explain the difference between a Journal header and a journal line ? Can we have different journal headers for the same journal import process?
    Thanks and regards,

    Not sure if I know what you mean, but...
    You can organise your journals by using journal headers. A batch can contain many Headers. Like this:
    Batch No 1
    - Journal Header 1
       - Line 1       Deb: 200$
       - Line 2                      Cre: 100$
       - Line 3                      Cre: 100$
    - Journal Header 2
       - Line 1       Deb: 100$
    - Line 2                      Cre: 100$
    When you run Journal Import, you run it for a specific source. So yes, you can have different journal headers for the same import process (if you mean import from the GL Interface).
    Does this clarify?
    Botzy

  • Difference between Journal batch,journal header and journal line

    Hi,
    Can someone explain what the difference between Journal batch, Journal header and journal line are?
    Can we post only one batch at a time?

    Hi,
    Journal Batch means it contains the batch name, description, status, running total debits and credits, and other information.
    Journal Header means it contains the batch ID, the journal entry name and description, and other information about the journal entry.
    Journal Line means it contains the journal entry header ID, the line number, the associated code combination ID, and the debits or credits associated with the journal line.
    So here are the relationships:
    Batch -- 1 to many -- Headers
    Headers -- 1 to many -- Lines
    --Basava.S
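The 1-to-many relationships Basava describes can be sketched as plain data structures; this is only an illustration (field names invented), with a balance check thrown in since a journal's total debits must equal its total credits to post:

```python
from dataclasses import dataclass, field

@dataclass
class JournalLine:
    line_number: int
    debit: float = 0.0
    credit: float = 0.0

@dataclass
class JournalHeader:
    name: str
    lines: list = field(default_factory=list)    # Header -- 1 to many -- Lines

@dataclass
class JournalBatch:
    name: str
    headers: list = field(default_factory=list)  # Batch -- 1 to many -- Headers

    def balanced(self) -> bool:
        """A postable batch should have total debits == total credits."""
        debits = sum(l.debit for h in self.headers for l in h.lines)
        credits = sum(l.credit for h in self.headers for l in h.lines)
        return debits == credits
```

As for posting: batches are selected individually or via AutoPost, so you can post one batch at a time or many.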

  • Payroll Journal Header Form

    Hi all,
    I have a question regarding US Payroll Journal (RPCLJNU0) .
    Using the standard UJT1 header form, I have created a custom form for my client in PE51.
    When I execute the form, the payroll area shows up as an asterisk (*) instead of displaying the payroll area the PERNRs belong to in the header of the form.
    E.g.: Payroll Area:  *
    I have picked the payroll area from VERSC - ABKRS.
    Is there any additional config I have to do for the payroll area to be displayed?
    Any input will be greatly appreciated.
    Thanks a ton in advance.

    Take the help of your ABAPer and check the Form you are using.
    Veer

  • XRAID will not mount (journal corruption)

    Hi guys, I have seen in the unanswered archived article below someone with exactly my problems, really looking for a solution to repair the journal within OS X, any ideas or developments on this?
    http://discussions.apple.com/thread.jspa?threadID=374802
    [QUOTED HERE FOR CONVENIENCE]
    Our XServe-RAID volume does not mount anymore.
    On /var/log/system.log I could find the following.
    Feb 22 09:52:29 ogre kernel: jnl: open: journal magic is bad (0xf433f != 0x4a4e4c78)
    Feb 22 09:52:29 ogre kernel: hfs: early jnl init: failed to open/create the journal (retval 0).
    It appears that the journaling is badly screwed.
    I tried to disable journaling, with no success.
    Here is what I get.
    ogre:/var/log root# diskutil disablejournal disk2s3
    The selected journaling request does not appear to be valid
    After some googling I found this:
    --- QUOTE ---
    The more serious problem occurs when the contents of the journal file are corrupted in such a way that the operating system does not proceed to mount the volume. Fortunately, this problem is extremely rare (I have seen three reports of corrupted journal files at the MacFixIt Forums and Apple Discussion Forums since journaling was introduced).
    Starting the journaling process is one of the first things that happens when a volume is recognized by the operating system. There appears to be no way to intervene in single-user mode before the journal file is examined. A message about "bad journal magic" appears. Unfortunately, every tool that can turn off journaling requires that the volume already be mounted.
    If the Macintosh can boot under Mac OS 9, dealing with a corrupted journal file involves restarting under Mac OS 9 and using a utility (such as our free TechTool Lite 3.0.4) to make visible these two invisible files at the root level of the volume:
    /.journal
    /.journalinfoblock
    Once these files are made visible, they can be deleted. Mac OS 9 does not support journaling, and recognizes nothing special about these files. Neither of these files can be seen in Terminal or in single-user mode in Mac OS X, because they are considered part of the disk directory.
    If the Macintosh cannot boot under Mac OS 9, but the drive has Mac OS 9 drivers installed, the drive can be put into a Macintosh that can boot under Mac OS 9, or an ATA/IDE drive can be put into a suitable FireWire enclosure.
    The possibility that you may need to remove these two journaling files under Mac OS 9 is a good reason for making sure that all volumes you use have Mac OS 9 drivers installed. The drivers must be installed when the drive is formatted.
    --- [END] QUOTE ---
    Well, my disk is part of a RAID 5 matrix.... so the solution proposed does not work.
    Does any one of you have any idea?
    [/QUOTE]

    I was using RAID mirroring two SCSI drives under 10.3 Server (not an XRAID) and found that one of the two drives was slow to spin up. The System considered the array "degraded" and made the single "good" drive available. Nothing that made any sense to me could get the "slow" drive repaired or re-incorporated into the pair. I could not split the pair for repair, and even after it was spun up and ready, Disk utility refused to repair it or its acceptable partner.
    What I eventually did, with some coaching from the forums and re-reading of Mac Help, was to delete the errant drive from the set. Then I could erase it as a single drive, and add it back into the set as a "new" replacement drive. The system accepted it as a replacement, and brought it up-to-date with its partner automatically.
    I have got to say that I find this approach completely counter-intuitive. It gives me the creeps to delete the "safety-net" drive, even if I know it is mucked up.
    But perhaps a similar approach would work in your situation.

  • Automatic TAX in GL During Journal Import

    Need a solution for the following.
    Need to integrate Payroll data from a legacy system to General Ledger. GL Interface and Journal Import feature is being used. The Journal Lines will carry the TAX code and will have to automatically create the TAX lines in the Journal after import.
    There is an option to automate TAX line generation during Manual Journal Entry by setting the TAX Required field in the Journal Header to 'Required' and supplying a Tax Code in the Line. However, on inspecting the GL INTERFACE tables for journal import there is no provision to set the Tax Required field to Required in the GL_INTERFACE table ( no column exists ).
    Does anybody know if Oracle GL supports the automatic tax calculation option for imported journals, or is the feature only available for manual journal entry?
    Are there any workarounds to use the standard GL functionality for imported journals as well?

    As I know, the Calculate Tax feature works only if you are entering manual journal entries from the screen (not tried using ADI); this is for customers who have no modules other than GL and are entering journals manually.
    If you are coming from GL_INTERFACE, I guess Oracle assumes that you are already calculating the tax in the source system and importing the tax entries.
    Journal Import on GL_INTERFACE does not support this feature.
    But if you want to calculate the tax using GL_INTERFACE, there are ways to do that. You can make use of gl_import_hook_pkg.post_module_hook, which is called at the end of the Journal Import program and takes in the batch_ids created during the import. You can create a package that takes these batch_ids and calculates tax using gl_calculate_tax_pkg.calculate_tax. This creates the additional journal lines for you at the tax rate with the tax account that is set up. It involves some updates to gl_je_batches, gl_je_headers, and gl_je_lines, but it works. Let me know if you need more details.
    Thanks
    Nagamohan
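The post-import approach Nagamohan outlines can be modeled in the abstract: after Journal Import, walk the imported lines and append a derived tax line for each taxable one. The sketch below is a model of the idea only; the tax codes, rates, and account name are invented, and it does not reflect the actual gl_calculate_tax_pkg internals:

```python
TAX_RATES = {"VAT20": 0.20, "GST5": 0.05}   # invented sample tax codes

def add_tax_lines(journal_lines):
    """Append one derived tax line per taxable journal line,
    mimicking the post_module_hook idea: tax is computed after
    import rather than supplied in the interface rows."""
    tax_lines = []
    for line in journal_lines:
        rate = TAX_RATES.get(line.get("tax_code"))
        if rate:
            tax_lines.append({
                "account": "TAX_LIABILITY",   # assumed tax account
                "amount": round(line["amount"] * rate, 2),
                "source_line": line["line_number"],
            })
    return journal_lines + tax_lines
```

The real hook then has to fold the new lines back into gl_je_lines and adjust the header and batch totals, which is the "some updates" part Nagamohan mentions.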

  • How Journal Import groups records from GL_INTERFACE into Journal Headers

    Hello,
    Does anyone know how or using what criteria the Journal Import program groups records from the GL_INTERFACE into Journal Entries?
    The R12 User Guide says that :
    REFERENCE4 (Journal entry name): Enter a journal entry name for your journal entry.
    Journal Import creates a default journal entry name using the following format:
    (Category Name) (Currency) (Encumbrance Type ID, if applicable) (Currency
    Conversion Rate, if applicable) (Currency Conversion Date, if applicable) (Originating
    Balancing Segment Value), chopped to the first 100 characters. If the above results in
    multiple journals in the same batch with the same name, then additional characters are
    chopped off, and a 2, 3, 4, and so on, is added for the second, third, fourth, journals with
    the same name.
    Does it mean that for every unique combination of Category+Currency+CurrencyConversion Rate+Currency Conversion Date, a Journal Header would be created? I also found that although not mentioned in the user guide, if the Accounting Date within a group of records is different, the import program includes accounting date to the above criteria and tries to create separate Journal headers per Accounting Date.
    Also, is there a way to override this ( Category+Currency+CurrencyConversion Rate+Currency Conversion Date) criteria?
    Thanks,
    Manish

    Any suggestions on the above query?
    Thanks & regards
    Aboo
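Manish's reading can be illustrated with a short grouping sketch: one journal header per distinct key value, where the key is assumed (per the question and the observed behavior) to be category, currency, conversion rate, conversion date, and accounting date. The real key Oracle uses is version-dependent, so treat this purely as a model:

```python
from collections import defaultdict

# Assumed grouping key, per the quoted guide plus the observed
# accounting-date behavior; not an authoritative list.
GROUPING_KEY = ("category", "currency", "conversion_rate",
                "conversion_date", "accounting_date")

def group_into_headers(interface_rows):
    """Group GL_INTERFACE-like rows into journal headers: one
    header per distinct grouping-key value."""
    headers = defaultdict(list)
    for row in interface_rows:
        key = tuple(row.get(k) for k in GROUPING_KEY)
        headers[key].append(row)
    return headers

rows = [
    {"category": "Payroll", "currency": "USD",
     "accounting_date": "2012-11-01", "amount": 100},
    {"category": "Payroll", "currency": "USD",
     "accounting_date": "2012-11-01", "amount": -100},
    {"category": "Payroll", "currency": "USD",
     "accounting_date": "2012-11-02", "amount": 50},
]
headers = group_into_headers(rows)
# Rows with a different accounting date land in a separate header.
```

Under this model, overriding the criteria means pre-processing the interface rows (or using a hook) so that rows you want in one header share identical key values.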

  • Corrupt Quicktime files from Photo Booth

    I am looking for a Quicktime Repair tool for Mac OS X. Preferably one that can re-write / repair quicktime headers and verify video frame data.
    My 4-year-old was making some videos of himself using Photo Booth. My wife and I happened to discover them a few nights ago and were laughing like mad; they were really cute and funny.
    So you can imagine our disappointment when I logged in today to move them to a safe location and couldn't see them in Photo Booth. In the filesystem I see a number of .MOV files, but none of them play. QuickTime reports "The movie could not be opened. The file is not a movie file." Seven of the fourteen movie files in the folder are like this.
    When I try to open one in VLC I get a more detailed message in the message log (I've snipped it down for the purpose of posting):
    access_file debug: opening file `/Users/family/Movies/kids/Movie 36.mov'
    main debug: using access2 module "access_file"
    main debug: pre-buffering...
    main debug: received first data for our buffer
    main debug: pre-buffering done 851942 bytes in 0s - 7424 kbytes/s
    mp4 debug: found Box: mdat size 32
    mp4 debug: skip box: "mdat"
    mp4 debug: found Box: wide size 8
    mp4 debug: skip box: "wide"
    mp4 debug: found an empty box (null size)
    mp4 debug: dumping root Box "root"
    mp4 debug: | + mdat size 32
    mp4 debug: | + wide size 8
    mp4 debug: file type box missing (assuming ISO Media file)
    mp4 error: MP4 plugin discarded (no moov box)
    I loaded the file up in 0xED (a Hex Editor) and the raw file header in the BAD files look like:
    ... mdatftypftypftypftypftypftypftyp....wide....mdat h.......
    while a GOOD file header look something like
    ... ftypqt ....qt ......................wide...smdat h .....
    It looks corrupt to me. I'd like to think that only the header is corrupt and the video data is still fine. I was curious if anyone had anything to assist?
    The only other thing I could think of doing was hex-editing the headers, swapping good headers in for the bad ones (since the videos from Photo Booth share the same dimensions, FPS, etc.).

    Was this problem ever solved? I've got a similar one.
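For what it's worth, the hex-dump diagnosis above can be automated by walking the file's top-level box ("atom") structure, which is essentially what VLC is doing when it reports the missing moov box. A rough Python sketch of the standard ISO media box layout (4-byte big-endian size, then 4-byte type):

```python
import struct

def list_top_level_boxes(data: bytes):
    """Walk the top-level boxes ('atoms') of a QuickTime/MP4 file.
    A playable movie needs a 'moov' box; the corrupt files above
    show only 'mdat'/'wide' plus a mangled header."""
    boxes, off = [], 0
    while off + 8 <= len(data):
        (size,) = struct.unpack(">I", data[off:off + 4])
        btype = data[off + 4:off + 8].decode("latin-1")
        if size == 0:                       # box runs to end of file
            size = len(data) - off
        elif size == 1:                     # 64-bit size in next 8 bytes
            (size,) = struct.unpack(">Q", data[off + 8:off + 16])
        if size < 8:                        # malformed header: stop
            break
        boxes.append((btype, size))
        off += size
    return boxes
```

Run over one of the bad .MOV files, this should list mdat and wide but no moov, confirming that what's missing is the movie index rather than necessarily the frame data inside mdat.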

  • BPC 7.5 NW on BW 7.3 - Unable to post journals

    Hello All,
    We're in the process of migrating from MS to a Netweaver version of BPC 7.5
    A few issues and one of the main ones relates to posting journals.
    On a consolidation app I try to post a journal and it disappears.
    Here's what I do.
    1 Created Journal Template in FINANCE Application in BPC Admin client.
    2 Clicked on New Journal in BPC Excel client under Journals.
    3 Saved journal after filling dimension members.
    4 Journal ID is created, after which message "Journal successfully saved" is displayed.
    5 Data is listed in backend Journal header table UJJ_JRNL and detail table /1CPMB/VHYKMJRDT.
    6 Now Journal data disappears from Journal entry layout
    Also, if I run a journal report no journal is displayed.
    I've tried it in various appsets and hit the same problem.
    Any help gurus?
    Thanks
    Paul

    Hi Paul,
    This issue is fixed with BPC NW SP10.
    Regards
    Tony

  • Controller Issue? or Journaling issue?

    Hello all
    I've been having this weird problem with an Xserve RAID and a couple of servers. I've been narrowing down the problem, but I want to get some outside input on the matter.
    I have 3 Xserves, 2 connected to the Xserve RAID (DAS). The problem seems to happen at random, but always starts with the same Xserve. It seems to be a hardware issue, from what I can see in the logs; the tricky part is identifying the piece of hardware responsible. Here are all the clues I've been gathering:
    Problem starts with
    kernel[0]: jnl: disk0s3: flushing fs disk buffer returned 0x5
    Sometimes followed by
    kernel[0]: ASP_TCP CancelOneRequest: cancelling slot 0 error 89 reqID 1 flags 0x9 afpCmd 15 so 0x1006ecc0
    And then the horrible
    kernel[0]: disk0s3: I/O error.
    kernel[0]: jnl: disk0s3: dojnlio: strategy err 0x5
    kernel[0]: jnl: disk0s3: writejournalheader: error writing the journal header!
    kernel[0]: disk0s3: I/O error.
    The inevitable screams from the services that depend on that volume follow
    imap[3899]: IOERROR: opening /Volumes/Userdata/spool/imap/user/jcarrillo/cyrus.header: Input/output error
    This causes AFP to stop and also imapd, both of which have their working directories within that volume. These are disks 8-13 in one hfs+ journaled volume in the raid.
    At first we thought the HBAs were the ones misbehaving, but after swapping slots and swapping servers, the same server is the one failing us.
    This error doesn't make the server unresponsive; only processes or actions that involve reading the volume hang, and that includes a simple ls in /Volumes or diskutil.
    We've already fsck'd both disks with no red flags. Here's the diskutil list:
    diskutil list
    /dev/disk0
    #: TYPE NAME SIZE IDENTIFIER
    0: Apple_partition_scheme *2.3 Ti disk0
    1: Apple_partition_map 64.0 Ki disk0s1
    2: Apple_HFS Userdata 2.3 Ti disk0s3
    /dev/disk1
    #: TYPE NAME SIZE IDENTIFIER
    0: GUID_partition_scheme *76.3 Gi disk1
    1: EFI 200.0 Mi disk1s1
    2: Apple_HFS hd 76.0 Gi disk1s2
    I will be running the Apple Xserve Diagnostics later tomorrow to rule out server-side hardware, but I am leaning towards a RAID controller malfunction. Anyone seen something similar?
    AFAIK there are no Xserve RAID diagnostics from Apple; is there a third-party tool? Or is it just a replace-a-component thing and hope for the best?
    Anything to help me be sure about blaming the controller will be greatly appreciated.

    You're only reporting logs from the host system. What about the logs on the RAID itself?
    This sounds to me like a problematic array - possibly caused by a faulty disk. RAID Admin should give you some insight as to whether that's the case or not.

  • Error Bad Pool Header

    I have had frequent crashes (Windows 8) with the message Bad Pool Header.
    It looks like it has something to do with the network card; it only happens when I'm using Wi-Fi and I'm a bit far from the router, with a slightly weaker signal.
    Is it related? What can I do to solve it?
    My laptop is a Dell Inspiron 14z.
    Thank you.

    Not a problem! Luckily, in this case, it appears that a kernel-dump won't be necessary.
    We have a few bug checks:
    BAD_POOL_HEADER (19)
    This indicates that a pool header is corrupt.
    BugCheck 19, {d, ffffe0000f4fc32f, c2fbebf7646c9270, 6c2fbebf7646c8d}
    ^^
    2: kd> !pool ffffe0000f4fc32f
    Pool page ffffe0000f4fc32f region is Unknown
    ffffe0000f4fc000 size: e0 previous size: 0 (Allocated) klpt
    ffffe0000f4fc0e0 size: a0 previous size: e0 (Free) Free
    ffffe0000f4fc180 size: e0 previous size: a0 (Allocated) klpt
    *ffffe0000f4fc260 size: d0 previous size: e0 (Allocated) *KLsc
    Owning component : Unknown (update pooltag.txt)
    ffffe0000f4fc330 size: 50 previous size: d0 (Free ) KLWp
    ffffe0000f4fc380 size: 100 previous size: 50 (Allocated) KPXY
    ffffe0000f4fc480 size: a0 previous size: 100 (Allocated) dlib
    ffffe0000f4fc520 size: d0 previous size: a0 (Allocated) KLsc
    ffffe0000f4fc5f0 size: 90 previous size: d0 (Allocated) KLsm
    ffffe0000f4fc680 size: d0 previous size: 90 (Allocated) KLsh
    ffffe0000f4fc750 size: 250 previous size: d0 (Allocated) @GM2
    ffffe0000f4fc9a0 size: d0 previous size: 250 (Allocated) KLsc
    ffffe0000f4fca70 size: d0 previous size: d0 (Allocated) KLsh
    ffffe0000f4fcb40 size: 250 previous size: d0 (Allocated) klxm
    ffffe0000f4fcd90 size: 40 previous size: 250 (Allocated) klqi
    ffffe0000f4fcdd0 size: 90 previous size: 40 (Allocated) KLsm
    ffffe0000f4fce60 size: d0 previous size: 90 (Allocated) KLsh
    ffffe0000f4fcf30 size: d0 previous size: d0 (Allocated) KLsh
    ^^ The pool block we're looking at within the pool page belongs to KLsc (unknown). When the owning component is unknown and it alerts you to update the pooltag file, it's generally a 3rd-party driver causing corruption.
    We can confirm this by checking the call stack:
    2: kd> k
    Child-SP RetAddr Call Site
    ffffd000`2256ea48 fffff803`83f18167 nt!KeBugCheckEx
    ffffd000`2256ea50 fffff803`83f17a03 nt!ExFreePoolWithTag+0xe97
    ffffd000`2256ead0 fffff800`023e9918 nt!ExFreePoolWithTag+0x733
    ffffd000`2256eba0 fffff803`83c86000 klwfp+0x7918
    ffffd000`2256eba8 ffffe000`00000000 nt!_guard_check_icall_fptr <PERF> (nt+0x0)
    ffffd000`2256ebb0 ffffd000`2256ebe8 0xffffe000`00000000
    ffffd000`2256ebb8 00000000`70574c4b 0xffffd000`2256ebe8
    ffffd000`2256ebc0 fffff800`01992d60 0x70574c4b
    ffffd000`2256ebc8 fffff800`019aaf51 klflt+0xed60
    ffffd000`2256ebd0 ffffe000`0048d2f0 klflt+0x26f51
    ffffd000`2256ebd8 ffffc000`1197b000 0xffffe000`0048d2f0
    ffffd000`2256ebe0 ffffe000`0e0282b0 0xffffc000`1197b000
    ffffd000`2256ebe8 ffffe000`0e09a2a0 0xffffe000`0e0282b0
    ffffd000`2256ebf0 00000000`00000000 0xffffe000`0e09a2a0
    ^^ The driver(s) which appeared to have attributed to the corruption are Kaspersky drivers, with
    klwfp.sys (the one that caused the pool corruption) specifically being the Network Filtering Component driver for Kaspersky.
    DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
    This indicates that a kernel-mode driver attempted to access pageable memory at a process IRQL that was too high.
    A driver tried to access an address that is pageable (or that is completely invalid) while the IRQL was too high. This bug check is usually caused by drivers that have used improper addresses.
    Unable to load image \SystemRoot\system32\DRIVERS\klwfp.sys, Win32 error 0n2
    *** WARNING: Unable to verify timestamp for klwfp.sys
    *** ERROR: Module load completed but symbols could not be loaded for klwfp.sys
    Probably caused by : klwfp.sys ( klwfp+6ff1 )
    ^^ Once again, the Network Filtering Component driver for Kaspersky.
    Remove and replace Kaspersky with Windows 8's built-in Windows Defender for temporary troubleshooting purposes as it appears to be causing NETBIOS conflicts:
    Kaspersky removal - http://support.kaspersky.com/common/service.aspx?el=1464
    Windows Defender (how to turn on after removal) - http://www.eightforums.com/tutorials/21962-windows-defender-turn-off-windows-8-a.html
    Regards,
    Patrick

  • Head Scratcher: PMG5 boots off of OEM disc, not off of HD

    Here's a stumper:
    Out of Warranty PM G5 1.6ghz machine, came into shop,
    for MLB replacement ( wasn't putting out video, although vid card tested fine.)
    Replaced MLB, made sure to reseat all items, and booted off of diagnostic disc
    to set Thermal Calibration ( for Processors) and to make sure everything was ok.
    Everything passed muster, and then I rebooted the machine to boot off the internal HD.
    The PowerMac boots up, but never gets past the grey Apple screen; instead the wheel (gear) spins and then stops, and the fans run.
    Rebooted the machine off the diagnostic disc; it passes muster again.
    Rebooted off the Tiger 10.4 OEM disc; it boots fine, no issues.
    So I erased the HD and reinstalled Tiger, which went smoothly.
    The machine reboots, same thing: will not go past the grey Apple screen.
    Verbose mode says that the stopping point is:
    NVDANV30HAL LOADED AND REGISTERED.
    Nothing beyond that...
    anyone with any ideas?

    Scrap the hard drive. Installing Tiger doesn't test for weak sectors, and even if you were to zero the drive that isn't a guarantee. TechTool Pro is pretty good at a media scan, but it does not attempt to map out bad/weak sectors.
    If the directory and/or journal is 'shot' (corrupt/damaged) AND you forced it to create new partition tables (not just erase the user volume), then I'd say the drive has suffered enough, is dead, and needs to be replaced.
