Outline extractor issue

I have tried extracting the outline using the OLAP outline extraction utility,
but I want to extract it pipe-delimited, in text format, in level order,
so I wrote it this way in my bat file:
win exportdim.exe localhost /admin /password / app/ db/ Product/ c:\temp\temp.txt/ |/ Lev/ 1/Text
but it doesn't work.
If I change the pipe to ! then it works, but what surprised me is that when I run it manually with | as the delimiter I do get the result.
So what I want to say is that the bat file below doesn't work with pipe as the delimiter in text format:
win exportdim.exe localhost /admin /password / app/ db/ Product/ c:\temp\temp.txt/ |/ Lev/ 1/Text
Help
regards

I am not sure you can use |; can you not just use ! or a different separator?
If it is vital that you use |, then you could do a find and replace on the extract.
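One likely reason the pipe version fails from a bat file: cmd.exe treats | as its pipe operator, so the character never reaches the utility unless you escape it (^|) or quote it. Failing that, the find-and-replace can be scripted; a minimal sketch (shown with sed and a stand-in extract file, since I don't have your environment):

```shell
# Stand-in for an extract produced with '!' as the delimiter
printf 'Product!100!Colas\nProduct!200!Root Beer\n' > /tmp/temp.txt

# Swap every '!' for '|' once the export has finished
sed 's/!/|/g' /tmp/temp.txt > /tmp/temp_pipe.txt
```

On Windows without sed, PowerShell's -replace operator does the same job.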
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Creating Rounded Rectangle Outline Colour Issue

    Hi all,
    I want to create a rounded rectangle with an outline or border in black (or another colour). But I am unable to find where I can add an outline colour as an option, either before drawing out the shape or after drawing it.
    The only option I can find is changing the background colour of the shape created, and I was wondering whether you can add an outline when creating, or after creating, rounded rectangles or other shapes...
    Thanks,
    James

    But I am unable to find where I can add an outline colour as an option before drawing out the shape or after drawing the shape.
    Styles -> Stroke.

  • Automate smart list using outline utility issues

    Hi,
    I want to automate the smart list using a .csv file for an entity dimension member, using the outline utility.
    I created a smart list called "alnasser" for entity dimension member E10134.
    The alnasser smart list contains a) pricing b) sales c) prefixing sales d) afterpricing e) Test factor, which has two children.
    I am not able to see the smart list members under alnasser.
    The source file is as follows:
    Entity,Parent,Data Storage, Description,Data Type,Base Currency,Plan Type (Plan1),Aggregation (Plan1),Smart List,ALNASSER
    E10134,,Store,ALNASSER,Unspecified,USD,1,+,ALNASSER,Sales
    It shows an unrecognized column or header name.
    How do I give the smart list source with the entity dimension, where have I made the mistake, and how do I give the source for Test factor?
    I guess the error is with the smart list header name alnasser.
    Sreekumar.H

    hi john,
    The alnasser smart list got created when I gave this source file:
    Entity,Parent,Data Storage, Description,Data Type,Base Currency,Plan Type (Plan1),Aggregation (Plan1),Smart List,ALNASSER
    E10134,,Store,ALNASSER,Unspecified,USD,1,+,ALNASSER
    I want to give, under alnasser: a) pricing b) sales c) prefixing sales d) afterpricing e) Test factor.
    I can do it manually using the business rules section, where you can associate it with the dimension name and the data type,
    but I want the smart list alnasser entries to be loaded from the same csv file as the entity.
    Is there a way to do this, or can you illustrate the source for the smart list?
    Sreekumar.H
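    For what it's worth, Smart List entries are usually loaded in a separate Outline Load utility run against the Smart List "dimension", not through the Entity load file. The sketch below is from memory only; the exact header names and the addEntry operation value may differ between releases, so please verify against the Outline Load utility documentation for your version:

```
SmartList Name,Operation,Label,Display Order,Missing Label,Use Form Missing Label,Description,Entry ID,Entry Name,Entry Label
alnasser,addEntry,,,,,,,Pricing,Pricing
alnasser,addEntry,,,,,,,Sales,Sales
```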

  • Help! Outline path issue with CS6 & CC...

    So I downloaded CC on the Macbook Pro Retina, running Mavericks, & noticed inaccurate paths generated when using a width profile - see my previous post. I decided to install my version of CS5 by remote disk. This worked, but it is so pixellated it is barely usable. I remembered that Adobe had offered free CS6. I downloaded & installed this, but found it has the same issue as CC.
    Please can someone tell me if I have to change a setting or something, because the test below shows that the way CS6 & CC are currently working is just not acceptable. If this is just the way it is I will have to cancel my subscription, as this shows that Adobe have had two releases to sort this out, & just stick with CS5 that I can actually rely on.

    Ah, so it is not just me, as everything I have read has just mentioned the align to pixel like your first reply. The problem is that if you simplify a circle, even if it is very similar it is still no longer a circle. Also, this is another step, so the new software is potentially taking longer to do the same task.
    The software should also be getting more accurate not less. I admit these are zoomed in at a very high level, see below, but it is clear that the simpler result achieved by CS5 with less points does produce a better, more accurate curve. I can imagine these would be quite noticeable if you create a logo & it is blown up to a large scale on the outside of a building or at a conference.
    From what I have been reading CS6/ CC are the most bug ridden releases to date. I have only just started using CC as the new laptop only arrived this week, but already I have noticed that the menus on Acrobat are pixellated. How did they not notice that??

  • Outline memory issues

    I tried to open an outline from another Essbase installation and I got the error "Not enough memory to open Outline, Outline Editor will now close". The size of the outline is only 800KB. Is it due to some problem with the outline file, or do I need to adjust some memory settings?

    Belynda,
    Assuming you're using one of the working outline templates... As I mentioned, "Note that each level in the outline has a beginning tag that restarts the outline level when you begin a new outline and a Continue tag that you use within an outline."
    So you'd use the 1Level, 2Level, 3Level, or 4Level tags to start numbering from I,A,1, or a in a new outline.
    Use the series of Continued tags to continue numbering within an outline.
    Art

  • Failed Hardware Scan and other issues E440

    Hi all,
    This is probably more rant than anything, but I wanted to give a heads up to others too.
    I have a ThinkPad E440 that is a year old. From the very first time I turned it on, there have been issues. The first hardware scan (via Lenovo Solution Center - LSC) showed a warning for the Intel Dual Band Wireless-AC 7260 Local Connection Test. There were also tons of System Events that always show up in the "Configuration History" part of the LSC. You can look at the calendar and tell exactly which days I used the computer because there will be System Events generated each day. Things like app crashes and failed drivers.
    In July 2014, I got the first warning for the 16 GB SSD - the SMART Short Self-Test. By February this year it showed as failed for each hardware scan (these were initially set up to run monthly).
    Also the whole time I've had it the touch screen would just stop working at some point and I would have to reboot to get it working again.
    I finally called Lenovo on March 30th, before my warranty expired. When I called that time, I didn't realize the hard drive failure was the SSD. So they sent me a new 500 GB drive. I also added the other things into the case when I talked to them. For the wireless issue they suggested making sure the driver was up to date. I did this and let them know when I called back that it was up to date and still having the warning. So I called them back to tell them about the wireless and also that I realized it was the SSD having the failure, not the main drive. The first case had already been closed even though none of the other items were addressed.
    So they opened another case (this is #2). They said to mail them the laptop since the wireless issue would probably be on the board and it wasn't something I could fix myself. They sent a box with a prepaid overnight shipping label. I was very sick for a few days so I sent it back to them on April 10th (a Friday). Via UPS I saw it was delivered on Saturday. Work was performed on it Monday, April 13th and sent back to me that very day. I received it on April 14th. This part of the service has been excellent - very fast response.
    Being in IT, I included a letter with the laptop that outlined the issues that should have been in the case. I also printed the hardware scans and what the system events looked like.
    When I got the laptop back, the sheet inside said they had replaced the Speaker because of Distorted Sound. This was not even on the list even though I had noticed it. I didn't even power up the laptop before calling them again - yes, I was furious! Plus our power was out...
    So this was noon on the 14th. They opened case #3 and sent me ANOTHER BOX so I could send it back.
    After our power came back on the 15th, I powered up my laptop. I opened the browser (I have it set to restore the previous session) and there was a sexually explicit video on YouTube. I opened the other browser and there was a different video on YouTube. So this person was watching YouTube instead of fixing my laptop. I looked through both browser histories and there was quite a bit of activity while my laptop was at the repair center... I ran the hardware scan - still failed and a warning for the wireless. They really hadn't done anything.
    I also found two pictures of the repair person in the recycle bin...
    So I called back. I was LIVID! They opened another case (this is #4). And sent me ANOTHER BOX. I finally learned the other day that once a case is opened, it cannot be edited or added to at all. Instead, they close the other case and open a new one. I guess their turnaround time for closing cases is excellent! I've never seen a system like that - and I've used a lot of them.
    I got a really nice, patient fellow on the line. He took all my info (again). I emailed him the pictures, screen captures of the YouTube videos, the letter I had sent - everything. He entered as much into the new case as he could - he talked to one of the supervisors to make sure he did it right. Somehow he flagged it so that the laptop would get more attention (time) at the repair facility. He also opened a separate case (an escalation ticket?) for a supervisor to call me regarding the person's conduct at the repair facility. He said they would call me that day. (It's now the 25th and I've never heard from anyone)
    So, he sent me ANOTHER BOX. I've built up quite a stack of them.
    Our power was out AGAIN from the 17th through the 19th (don't get me started).
    I noticed a hardware scan had now gotten a failure on the main hard drive. So I called them on the 21st to add this to the case before sending the laptop back. The girl said they can't add anything to an existing case or edit it at all once it's opened. She would have to open a new case and SEND ME ANOTHER BOX. I told her to forget it because I was ready to send it in and didn't want to wait for another box. I also asked for a status on that "escalation case" where the supervisor was supposed to call me. In order to do this she, yes, wait for it, had to open ANOTHER CASE!! So they would know I wanted a status. I'm completely dumbfounded.
    So I sent it back on the 21st. This time I practically wiped it. I had already removed all my files the last time, but I had left my bookmarks and browser history intact.  I set up a guest logon with admin privileges. I updated my letter and printed off more stuff to include with the box. On one sheet I had only the case number, the serial number and machine type. On another sheet I had "DO NOT SEPARATE THIS PAPERWORK FROM THE LAPTOP" and the case number. I put this sheet on top (The guy on the 15th said my letter and stuff may have gotten separated from the laptop once it was delivered to the repair facility). I used a ton of staples so it would all stay together. I included in my letter the failure on the main hard drive and asked if they could look at it. I wrote about having to open a new case if I wanted to include it.
    They received it on the 22nd. A nice gentleman from the repair facility called me that day asking about the password. that. was. written. on the sheet they have you fill out. I told him what happened last time and also mentioned the hard drive failure and asked if he could look into it. He said they would.
    I received my laptop back yesterday morning. The sheet that came with it said they had "replaced the following parts to complete the repair of your laptop."
    Part Description                                           Symptom
    IMAGE                                                             Replaced due to engineering change
    System board                                                 Network card error
    Hard disk drive                                                Network card error
    ECA-WIRELESS                                            <no symptom listed>
    There was also a sheet saying they had installed a factory preload of software and I needed to install Lenovo and Windows updates.
    When I booted it up, the first thing I noticed, in the lower right corner was:
    Windows 8.1
    SecureBoot isn’t configured correctly
    Build 9600
    I ran a hardware scan. Well, I tried. It stopped part way through and said it finished successfully but most of the tasks showed up as cancelled. I tried to run it again - issues - rebooting ensued. It said the LSC wasn’t available and that I should try again or reboot.
    Tried several times. Then got what I guess is the new BSOD - kinder, gentler:
    Your PC ran into a problem and needs to restart. We're just
    collecting some error info, and then we'll restart for you. (xx% complete)
    If you'd like to know more, you can search online later for this error: DRIVER_CORRUPTED_EXPOOL
    Even though the LSC said my Lenovo files were all up to date, I ran the Update. And first I had to download a new version of Update. Then I downloaded all of the Lenovo updates and installed them (there were quite a few). The BIOS update failed. While I was doing the Lenovo downloads, I got a light blue screen but no text (I was out of the room so I'm not sure what happened). Did CTRL-ALT-DEL and it shows only IE and Task Manager as applications that are running. Could not “Switch to” IE. Hitting window key to go to start didn't do anything. So I had to restart.
    By 3pm yesterday there were 34 system events in the configuration history.
    I ran the hardware scan again after I updated the Lenovo files, and you guessed it! Failure on the SSD (SMART Short Self-Test) and warning on the wireless. Nothing had changed. Except hardware scan is acting different than it did before I sent in the laptop for repairs. When it finishes, it instantly closes and just shows 100% complete. When I click on "see last results" it shows a screen called
    Log Information,
    Canceled 04/24/2015 n:nn pm 
    You have not done a hardware test on your computer
    And the calendar in LSC only shows the very first hardware scan I did on Friday. Even the hardware scan screen shows the date and time of the last scan. It also shows the error code. In order to see exactly what is failing, I have to sit there and watch it very closely and snap a picture of the screen as soon as the error (or warning) shows up.
    When I would try to run Windows update, it would hang up PC Settings. I couldn't even kill it using task manager because it didn't show up as a task. During this, I got a flag saying the firewall wasn't turned on. I tried to turn it on, but clicking on Turn on Windows Firewall didn't do anything. I tried to setup my Microsoft account but that just hung too.
    I ended up running Windows Update FOUR TIMES to get all the updates installed. Every time I ran it, it said "Done!" and I would run it again and more would show up. The last time was this morning.
    At some point, the error about SecureBoot went away.
    Then, I created a bootable BIOS update disk. Following the ReadMe instructions, I went through ThinkPad Setup and verified several values. Of note:
    Secure Boot was DISABLED. According to the ReadMe file, this should be ENABLED in Windows 8.1. I enabled it.
    Under Startup/Boot, according to the ReadMe that came with the BIOS update, UEFI/Legacy Boot is supposed to be set at UEFI Only for Windows 8.1. Mine was set to "Both". I changed it.
    In Startup, OS Optimized Defaults was DISABLED, even though it says right there (and in the BIOS update ReadMe) it should be ENABLED to meet Microsoft Windows 8 Certification Requirement.
    After these updates, I flashed the new BIOS.
    Then, I ran hardware scan again...
    Now I have TWO failures on the SSD: Random Seek Test and SMART Short Self-Test. Great.
    In the Event Viewer (that I recently discovered), it says my disk has a bad block. It just says The device, \Device\Harddisk\DR1, has a bad block. I assume this is the SSD...
    There are 867 events in the event viewer - Critical, Error, and Warning...
    Fifty-two of these are from October 7, 2013 - before my little laptop was a glimmer.
    The rest are from when Lenovo had it and yesterday and today.
    64 of them are the disk error.
    341 are from DeviceSetupManager. 65 of those are from failed driver installs. 69 are for not being able to establish a connection to the windows update service. 64 are from not being able to establish a connection to the Windows Metadata and Internet Services (WMIS).
    3 times it's rebooted without cleanly shutting down
    60 of them are from Service Control Manager and say The TDKLIB service failed to start due to the following error: The system cannot find the file specified.
    One of them says {Registry Hive Recovered} Registry hive (file): '\??\C:\Users\Default\NTUSER.DAT' was corrupted and it has been recovered. Some data might have been lost.
    16 are warnings that various processors in Group 0 are being limited by system firmware.
    12 say the certificate for local system with thumbprint <bunch of hex numbers> is about to expire or already expired.
    108 are warnings for failure to load the driver \Driver\WUDFRd for various devices
    16 are application errors
    One is for the computer rebooting from a "bug check"
    15 are for name resolutions timing out after none of the configured DNS servers responded.
    10 are for SecureBoot being disabled.
    14 for services terminating unexpectedly
    15 are for WLAN Extensibility Module has stopped
    61 are for applications not being able to be restarted because the application SID does not match Conductor SID
    12 are for activation of CLSID timing out waiting for the service wuauserv to stop
    So, I'll call them on Monday and open. a. new. case (#5?) - but really 7. And get A NEW BOX.
    I'll keep you updated!

    Hi amycdero and welcome to the HP Forum,
    I understand that you are having scanning and printing issues after upgrading to Mavericks OS X v10.9.1. I will try my best to help you resolve this issue.
    In this document for Mac OS X: Scanning Software Does Not Open or Stops Responding are steps that may help you with your scanning issue.
    This document for Fixing Ink Streaks, Faded Prints, and Other Common Print Quality Problems should help with the streaking printing issue.
    I hope this information is helpful. Please let me know.
    Thank you,
    I worked on behalf of HP.

  • Problems opening outline: Not enough memory available

    Dear all,
    since about 3 or 4 weeks ago I have had problems opening an outline in the EAS console: after a double-click on the outline and about 1 minute of waiting, the following message pops up: "There is not enough memory available on the Administration Server to open this outline." If I try to open the outline in "view mode" everything works fine. I don't know where the problem is - I have restarted the whole system, all other applications/databases are down - where is the memory?
    Does anybody know this problem? How can I trace it? How can I see/change memory settings? We are working with Oracle Application Server on Windows NT as container for APS, HSS, AAS (EAS). Essbase Analytic Server is installed and running on HP UX.
    Thanks in advance!
    Regards
    André

    This wouldn't be an ASO database by any chance, would it?
    Because if it is, you're likely seeing the effects of a fragmented outline.
    It seems awfully big for the number of members you are describing.
    Take a look at this thread: Re: ASO too large for techniques on how to get around the ASO outline fragmentation issue.
    It might at least be worth trying to build the dimensions in a new database and see if the error occurs. If it does not (as this will be a fresh database, it won't be fragmented) you have pretty convincing proof that the issue is related to ASO outline fragmentation.
    Regards,
    Cameron Lackpour

  • Quick Outline in EA3

    The quick outline still does not appear to be working properly for me in EA3.
    If I open a package from a FIND DB object search and choose quick outline, all I get is the package name listed at the top of the outline with nothing below it.
    If I open a package from the navigator tree and quick outline, I get a few local variables/cursors listed but none of the main procs/funcs etc.
    As I type, I've gone back to check another scenario, and when I now do the outline it appears to be picking it up, so I'm not sure if it's an outline loading issue or my machine.
    However, the outline is still not picking up when you switch between packages; you have to force a new outline for each package you look at.
    Thanks
    Paul

    Reported previously
    30EA4 3.0.03 Build 03.97 Bug: Package Body functions are not visible

  • AE and Computer- Audio dropping out and stuttering issues

    Hi all,
    I recently bought an Airport Express 2 weeks ago. I hooked the unit up in my bedroom via the Ethernet (roaming) method. I use AirPlay and turn on Computer and Airport Express to play both in my living room and my bedroom at the same time. The problem is I'm noticing my iTunes in my living room exhibits some stuttering during songs, or the audio will start to drop out intermittently and sometimes shuts off completely, but the Airport Express is still playing in my bedroom without issue. I've tried turning off the wireless but it doesn't do anything. When I turn off AE in AirPlay and just play using the Computer it works fine. I'm not sure what can be causing the interference, but it seems it's between my computer and AE when they are both turned on. I would appreciate any suggestions forum members have to help me resolve this issue. I have the 2nd generation AE. Thanks a lot.
    Macthemini

    Hi Macthemini,
    Thanks for using the Apple Support Communities. It sounds like you are experiencing issues with AirPlay between iTunes on a Mac mini and your AirPort Express. The following resource outlines troubleshooting issues for AirPlay performance:
    iTunes: Troubleshooting AirPlay and AirPlay Mirroring - Apple Support
    http://support.apple.com/en-us/HT203822
    Troubleshooting performance issues
    Wi-Fi Connection
    If you are experiencing intermittent playback or significant network lag with AirPlay, it could be due to a weak Wi-Fi connection, interference, or the distance between the Wi-Fi router and your AirPlay-enabled device.
    Ensure that other devices aren't trying to stream to the same Apple TV at the same time.
    Ensure that your Wi-Fi router is set up with the recommended settings for the best performance.
    Certain external devices, such as microwave ovens and baby monitors, may interfere with a Wi-Fi network. Try moving or disabling these devices.
    If your wireless and wired networks are the same, try connecting your Apple TV to the router via Ethernet instead of Wi-Fi.
    Use the Wi-Fi network troubleshooting guide to resolve interference and other issues.
    - Matt M.

  • Visible square outline on iMac 27" display

    This is a tricky one to explain but here goes. I got my 27" iMac a few weeks ago - love it. However, there's one small issue. About a week after I bought it, I noticed that there's an outline of a square on one half of the display - it's very faint but it's definitely there. It can be seen against any background ranging from white to dark grey and can be seen during videos and games if you focus on the area around it.
    I tried to take a picture of what I can see and have attached it, and tried to outline the issue (it was taken on my iPhone 4, so apologies for the poor quality). It gets even more curious: I found out that this outline is in exactly the right place and size for the Google Chrome app to fit perfectly inside it, and I think the plug may have come out while this app was displayed, but I doubt that would cause this issue.
    I have tried Googling this issue but cannot find a single issue to match - I don't really want to go to Apple and explain the issue because they messed up my iPad 2 replacement (even though I now have Apple Care). This issue doesn't ruin my latest Apple purchase because it's a beautiful thing, it would just be nice to know the cause of this issue and if there's a solution.
    I would appreciate any advice, help or history on this issue. Thank you, Steven.
    iMac Specifications
    27" iMac (Mid 2011)
    Mac OS X Lion (Pre-installed with Snow Leopard)
    3.1Ghz QuadCore CPU
    4GB DDR3 RAM
    AMD Radeon HD 6970M (1GB)
    1TB HD

    Hi Stevey,
    Your issue sounds quite similar to mine, except mine fades away again after a short while.
    Please see my thread on my issue here: https://discussions.apple.com/message/16260001#16260001
    Best,
    Luke

  • Nightmares? Using large outlines can give you some....

    Hi guys,
    System info: Essbase 11.1.1.1, Essbase servers Solaris (good HW), admin servers partly on windows and also on Solaris, ASO
    in our current project we are building some really large cubes, and while we face countless obstacles, we have managed to get along pretty well so far. One thing giving us a severe headache recently is the way Essbase treats large outlines. In most cubes we have a single huge dimension containing up to 4-8 million members (outline size ~ 1GB). While it is possible to load that dimension (<1 hour), it is almost impossible to modify that outline afterwards.
    1. The outline won't load into EAS for further editing (not enough memory). We tried a lot of things to improve this, playing with memory parameters etc., but the general problem still persists.
    2. When trying to manipulate that outline with rule files (e.g. to update it) we face huge waiting times (hours+, much longer than the original loading time), even when adding single members to another dimension with just 10 members.
    3. Next we wrote a rule file (since the outline won't open in EAS...) to clear the large dimension (remove unspecified members). The dimension was cleared successfully but the outline is still 1GB. OK, we thought, Essbase just keeps the space for future use; but when we continue to work with the outline (now containing just 20 members overall) we watch Essbase rewriting the full 1GB (.otl<->.otn) when adding a member. Now we look at an outline with 20 members, add the 21st, and wait minutes.
    4. OK, let's compact it, we thought, and tried. Essbase "compacted" the outline from 1GB to 800MB. Now we look at a 20-member outline using up 800MB and taking minutes to open.
    Time for a sanity check please. Has anyone out there witnessed anything like this? Any help is greatly appreciated!
    TIA
    Edited by: user649142 on 16.07.2009 11:50

    So the ASO outline fragmentation issue is still there.
    This has been an issue since 7.1.5 (or was it .3, whatever).
    Take a look at this thread -- lots of good suggestions: Re: ASO too large
    Regards,
    Cameron Lackpour

  • Solutions to Some DNS, OD, AFP, CalDAV, AFP, and Spotlight Issues

    I recently upgraded our aging Xserve (1.3GHz G4, yeah baby!) to Leopard Server from Tiger Server so everything in the office would all be on the same OS. This server hosts all our in-use files via an Xserve RAID, and our dead files are on the internal 3-disk striped array. It's also the Open Directory master and hosts the office's DNS (I wanted to put OD and DNS on the new Xeon Xserve that hosts our FileMaker database, Retrospect backup, and our Squid web proxy, but something in its DNS configuration is broken and I gave up on that since OD and DNS don't really put any additional stress on the G4). Anyway, with all the hoopla of configuring, reconfiguring, and fixing, I've learned some things that may help others.
    *DNS, Open Directory, and AFP*
    I had some trouble with groups and ACL permissions, inability to get CalDAV working, and general strangeness with OD and Workgroup Manager. Demoting the server from an OD master to standalone took care of most of these. Part of the problem was an incorrect LDAP search base, which can only be corrected by blowing away the OD master and making sure DNS is set up properly. We only have about 20 users (we don't host network homes or anything like that), so when I did the demotion I just let it destroy all the accounts, and after promoting the server back to an OD master, I recreated the users and groups from scratch. So with freshly created users and groups, and after resetting the ACL's and propagating permissions on the network shares, that cleared up the permissions problems. The corrected LDAP search base fixed the Directory application too, which wasn't showing any contacts before, and it got Kerberos working as well.
    iCal/CalDAV
    All this work also got CalDAV/iCal calendar sharing running, and when I enable calendaring for a user, it stays enabled in WGM. Before, whenever I'd switch to another user and come back, calendaring would be turned off in WGM, although it was in fact still enabled. I haven't tested calendaring much yet, and adding an account in iCal is still a bit flakey. Our DNS is just internal, so in Server Admin I un-checked "fully-qualified" for our few DNS hostnames. If I mark the server's DNS hostname as fully-qualified, auto-discovery of the address in iCal won't work. iCal rejects my passwords if Kerberos authentication is used in either case, even if I manually point it to the IP address, but it connects fine without Kerberos.
    *Spotlight and AFP*
    Another problem I had after upgrading the server was stale spotlight searches. I used Server Admin to turn spotlight searching on and off for the two shares, and I tried any number of mdutil commands and System Preferences "privacy" settings to turn indexing on and off and to rebuild the indexes. With the old machine and about a terabyte of data, indexing would take all night, so I couldn't really try a lot of things. Every time the index was rebuilt, it would propagate out to the office just fine, but it would never update from then on. The solution to that was changing the permissions on the volumes the shares are on. The shares themselves had the correct permissions and ACL's, but the volumes need their POSIX permissions set to:
    owner: root: read/write/execute
    group: admin: read/write/execute
    everyone: read/execute
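    Translated into commands, that volume-level fix looks something like this (a sketch; "/Volumes/Data" is a made-up path, so substitute your share's volume, and note the real volume needs root:admin ownership, which requires sudo):

```shell
# On the actual server volume (run with sudo):
#   chown root:admin /Volumes/Data
#   chmod 775 /Volumes/Data    # owner rwx, group rwx, everyone r-x
# Demonstrated here on a scratch directory so it runs without root:
mkdir -p /tmp/demo_share
chmod 775 /tmp/demo_share
```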
    Over the years those permissions had been changed (this server started out with OS X 10.2 Server btw, so there's been plenty of time for things to get b0rked), but Tiger Server apparently didn't care. Another thing I did (although I'm not sure if this was necessary) was to change the "Others" POSIX permission from None to Read Only. Once all that was changed, mdworker started chugging along to keep the spotlight index updated. However, it went nuts after the 10.5.6 Server update, constantly working with no sign of ever finishing. The update notes do make specific mention to Spotlight changes, which says you have to disable spotlight indexing for any shares in Server Admin, then re-enabled it to "take advantage of the new features." That started another night of indexing, but it's now done and updating properly. I noticed that a new inherited ACL for the user "Spotlight" showed up at the root level of each share point. I'm not going to touch that.
    I'll admit that I hate spotlight's interface and lack of control in Leopard (i.e. it always resets your search parameters, you can't change the results window's columns, and you have to already be in the folder you want to search, etc.). That being said, I can search for anything on the server and it finds the results almost instantly. Even a search that returns "more than 10,000" results only takes about 5 seconds. With Tiger or Panther server, ANY search would take several minutes and grind the server to a halt, making anyone else who tried to save a file or navigate the shared volumes get the spinning beach ball.
    Hopefully this will be of help to someone.

    Hi.
You've not outlined your issues with AFP per se; are you having any?
DNS is critical for OS X Server; it's appropriately finicky about having working forward and reverse DNS lookups for its FQDN.
Certainly, Leopard Server may make assumptions contrary to your intent if you use the non-advanced setups, as it will attempt to use DNS and, if DNS is not available, this may result in settings other than you desire.
    By default, hostnames entered in the Server Admin DNS settings, will be considered as part of the DNS zone you're editing.
    So:
    server
    would be for: server.yourfqdn.com
    If you mark that as fully qualified, well, then it's looking for: server
    which is not a FQDN
As well, I believe Apple states it should no longer be necessary, but if you do need to change the hostname for your OD master, it is often possible via the Terminal/command-line via:
    (sudo) changeip
http://developer.apple.com/documentation/Darwin/Reference/Manpages/man8/changeip.8.html

  • Adobe Forum Log-in Issues?

    If you attempted to sign-in to the Adobe Forums, on Saturday, Nov. 23, 2013, and could only access this forum as a "Guest," then the Sign-In process is now working again. You no longer have to read as a Guest, and can sign-in, as of today, Sunday, Nov. 24, 2013.
    Sorry for the inconvenience.
    Hunt

Was it your fault?
    When I first encountered the issue, I thought that it might be. Then, I tested with four different browsers, all on the PC, with the same exact results.
    I got two e-mails from Adobe, and the first outlined the issue. The second notified me, that it had been fixed.
As one COULD read the Adobe Forums as a Guest, I posted about the problem, to let any Guests know that things HAD been fixed (through no effort on my part), and that they could then Sign-In as per normal.
    Still, if it makes others feel better, I can take the blame...
    Hunt

  • CertView issues, SQL Expert logo not available

I was checking CertView and the logo for the Oracle Database SQL Certified Expert doesn't seem to be available; the link is broken or something to that effect. Also, there seem to be some errors, as there are duplicated rows.
    Anyone else having issues in there?
    Edited by: fsitja on Nov 3, 2009 9:43 PM

    Hi All,
    If you are having issues such as these with CertView, please outline these issues in as much detail as possible and send to [email protected] Please include your Oracle Candidate ID as well as your SSO user name. This way, we will be able to determine if there is an issue candidate by candidate, or if there is a bigger issue that we need to take back to the technical team.
    Thanks to everyone for your patience on CertView as we work out unexpected kinks. We are working very diligently to get these issues sorted out and ensure that CertView operates as expected.
    Regarding tests at Pearson VUE not showing up in CertView, due to a technical issue, there was a delay in data that was coming to us from Pearson VUE getting uploaded into the database. At this time, we are told that Pearson VUE data up to October 22 should now be included in CertView. The remaining data is still pending. This issue has also affected fulfillment, so if you are expecting a certificate from an exam or exams taken up til now at Pearson VUE, your certificate has been delayed. We expect all kits for certifications completed thru October 22 to be sent by the end of next week. We hope to receive another wave of fulfillment data next week as well to get us caught up.
Please keep in mind that the transition from Prometric to Pearson VUE was a HUGE transition and we are diligently working out the kinks. Also, during this transition, Oracle released CertView, which has had its own set of challenges. We understand that both of these projects have also been challenging for our customers, and we appreciate your patience and understanding as we work thru these issues and strive to make these processes better!
    Regards,
    Brandye Barrington
    Certification Forum Moderator
    Certification Program Manager

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

Product : ORACLE SERVER
Date written : 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also
    looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPU's and system call interfaces (API's) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
    negative). In hexadecimal the largest positive number that can be
represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
    This is ONE less than 2Gb.
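The figures above are easy to verify from any Unix shell:

```shell
# Largest value representable in a signed 32-bit word: the top bit is
# the sign, leaving 31 bits for the magnitude.
printf '%d\n' 0x7FFFFFFF            # prints 2147483647
echo $(( (1 << 31) - 1 ))           # also 2147483647
echo $(( 2 * 1024 * 1024 * 1024 ))  # 2147483648, i.e. exactly 2Gb
```

So 2147483647 is the last byte offset a signed 32-bit file API can address, which is why trouble begins at exactly 2Gb.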
Files of 2Gb or more are generally known as 'large files'. As one might
expect, problems can start to surface once you try to use the number
2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
Advantages of files larger than 2Gb:
- On most platforms Oracle7 supports up to 1022 datafiles. With files
  < 2Gb this limits the database size to less than 2044Gb. This is not
  an issue with Oracle8, which supports many more files. In reality the
  maximum database size would be less than 2044Gb due to maintaining
  separate data in separate tablespaces, some of which may be much less
  than 2Gb in size.
- Fewer files to manage for smaller databases.
- Fewer file handle resources required.
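The 2044Gb ceiling quoted above is simply the file count times the per-file limit:

```shell
# Oracle7 ceiling: up to 1022 datafiles, each just under 2Gb
echo $(( 1022 * 2 ))   # 2044 (Gb), the maximum database size quoted above
```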
Disadvantages of files larger than 2Gb:
- The unit of recovery is larger. A 2Gb file may take between 15 minutes
  and 1 hour to backup / restore depending on the backup media and disk
  speeds. An 8Gb file may take 4 times as long.
- Parallelism of backup / recovery operations may be impacted.
- There may be platform specific limitations - Eg: Asynchronous IO
  operations may be serialised above the 2Gb mark.
- As handling of files above 2Gb may need patches, special configuration
  etc., there is an increased risk involved as opposed to smaller files.
  Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
Important points if using files >= 2Gb
- Check with the OS vendor to determine if large files are supported
  and how to configure for them.
- Check with the OS vendor what the maximum file size actually is.
- Check with Oracle Support if any patches or limitations apply on your
  platform, OS version and Oracle version.
- Remember to check again if you are considering upgrading either
  Oracle or the OS, in case any patches are required in the release you
  are moving to.
- Make sure any operating system limits are set correctly to allow
  access to large files for all users.
- Make sure any backup scripts can also cope with large files.
- Note that there is still a limit to the maximum file size you can use
  for datafiles above 2Gb in size. The exact limit depends on the
  DB_BLOCK_SIZE of the database and the platform. On most platforms
  (Unix, NT, VMS) the limit on file size is around
  4194302*DB_BLOCK_SIZE.
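For a feel of what that formula allows, here is the approximate per-file ceiling for some common block sizes (the 4194302 multiplier is the figure quoted above; exact limits remain platform specific):

```shell
# Approximate maximum datafile size of 4194302 * DB_BLOCK_SIZE bytes
for bs in 2048 4096 8192 16384; do
  echo "DB_BLOCK_SIZE=$bs -> max datafile $(( 4194302 * bs )) bytes"
done
```

With an 8Kb block size that works out to just under 32Gb per datafile.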
Important notes generally
- Be careful when allowing files to automatically resize. It is
  sensible to always limit the MAXSIZE for AUTOEXTEND files to less
  than 2Gb if not using 'large files', and to a sensible limit
  otherwise. Note that due to <Bug:568232> it is possible to specify a
  value of MAXSIZE larger than Oracle can cope with, which may result
  in internal errors after the resize occurs. (Errors typically
  include ORA-600 [3292])
- On many platforms Oracle datafiles have an additional header block at
  the start of the file, so creating a file of 2Gb actually requires
  slightly more than 2Gb of disk space. On Unix platforms the
  additional header for datafiles is usually DB_BLOCK_SIZE bytes but
  may be larger when creating datafiles on raw devices.
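As a concrete illustration, a nominally 2Gb datafile with the common 8192-byte block size occupies, assuming the usual one-block header:

```shell
# 2Gb of data blocks plus one DB_BLOCK_SIZE header block (8192 assumed)
echo $(( 2 * 1024 * 1024 * 1024 + 8192 ))   # 2147491840 bytes on disk
```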
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
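The named-pipe technique from those notes can be sketched as follows. The exp command line is illustrative only; a plain printf stands in for it here so the sketch runs anywhere:

```shell
# Export through a FIFO so no single >2Gb dump file is ever created:
# gzip drains the pipe while the producer writes into it.
rm -f /tmp/exp_pipe
mkfifo /tmp/exp_pipe                         # mknod /tmp/exp_pipe p on older systems
gzip < /tmp/exp_pipe > /tmp/export.dmp.gz &
printf 'demo export data\n' > /tmp/exp_pipe  # real use: exp user/pw full=y file=/tmp/exp_pipe
wait                                         # let the background gzip finish
gunzip -t /tmp/export.dmp.gz && echo "compressed export is intact"
```

Import works the same way in reverse: gunzip writes into the pipe while imp reads from it.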
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
importing, specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
    Export to Tape
    ~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
The examples in <Note:30528.1> can be modified for use with SQL*Loader
for large input data files.
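Another common workaround is to split the input data file into chunks below the limit and run SQL*Loader once per chunk. A runnable miniature of the idea (tiny file and 1Kb chunks here; in practice you would split with something like -b 1024m, and the sqlldr line is illustrative only):

```shell
work=$(mktemp -d)                             # scratch directory
printf 'row1\nrow2\nrow3\n' > "$work/input.dat"
split -b 1k "$work/input.dat" "$work/chunk_"  # real use: split -b 1024m bigfile.dat chunk_
cat "$work"/chunk_* > "$work/reassembled.dat"
cmp "$work/input.dat" "$work/reassembled.dat" && echo "chunks concatenate losslessly"
# then: for f in "$work"/chunk_*; do sqlldr userid=... control=load.ctl data="$f"; done
```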
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
- The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
import java.io.File;

public class fileCheckUtl {
    public static int fileExists(String FileName) {
        File x = new File(FileName);
        if (x.exists())
            return 1;
        else
            return 0;
    }

    public static void main(String args[]) {
        fileCheckUtl f = new fileCheckUtl();
        int i;
        i = f.fileExists(args[0]);
        System.out.println(i);
    }
}
Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 Test it:
SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
  2  /
FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
---------------------------------------------
                                            1

Maybe you are looking for

  • How to develop dll towards VB and Delphi for vi's of a third-part usb device

    Hello, dear lv'ers: Recently i am evolved in a project where i want to reuse my developed ac measurement modules (certainly in .vi format) in target computers. The related information has been collected below: 0) Develope machine: win7, lv2010 sp1 de

  • XML to Flat File DB Table So I and Others Can Use SSRS for Reporting

    I'm using a program called Orbeon Forms to quickly create and distribute forms on the local network. The program just recently got updated to Version 4.6 so as to use SQL Server DB as a backend. (I was using MySQL in a lower versions) The data create

  • Linux 5 64 bit ODBC connectivity with MS SQL Server

    Dear All, Env. Oracle EBS R12 – DB 10gR2 64bit on Redhat linux 5.4 64bit We want to connect OracleDB10gR2 64bit to MS sql Server 2000 on Windows 2003. Is there any ODBC driver is available for Linux 64 bit Regards

  • Photoshop Elements Editor won't open

    I have been using Photoshop Elements since July without any problems.  Now the Oraganizer will open but the editor will not. How can I fix this?

  • Gaming X20 mouse - Right button issue

    Hello, i'm having a problem with a Gaming X20 mouse and I can't find it anywhere on the web, as well as new drivers for the mouse. The problem is when, in some games, i press the right mouse button, the view goes crazy, in some games it's subtle, but