Hardening & Keeping Log files in 10.9

I'm not in IT, but I'm trying to harden our Macs to please a client. I found several hardening tips and guides written for older versions of OS X, but none for 10.9. Does anyone know of a hardening guide written with commands for 10.9?
Right now I've found a guide written for 10.8 and have been mostly successful implementing it, except for a couple of sticking points.
It suggests keeping security.log files for 30 days. I found out that security.log was removed and most of its functionality is now in authd.log, but I can't figure out how to keep authd logs for 30 days. Does anyone know how I can set this?
I also need to keep install.log for 30 days, but I'm not seeing a way to control this in /etc/newsyslog.conf. Does anyone know how to set this as well?
Does anyone know if the following audit flags should still work: lo,ad,fd,fm,-all?
I'm also trying to keep system.log and appfirewall.log for 30 days. I've figured out these have moved from /etc/newsyslog.conf to /etc/asl.conf, but I'm not sure if I've set this correctly. Right now I have added "store_ttl=30" to these two lines in asl.conf. Should this work? Is there a better way to do it?
          > system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M store_ttl=30
          ? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M store_ttl=30
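A hedged sketch of one way to do this on 10.9, for anyone hitting the same wall: asl.conf(5) documents ttl= as the lifetime (in days) of rotated log files, while store_ttl applies to the ASL data store rather than the rotated files, so the two lines above would become the first two lines below. In 10.9, install.log and authd.log rotation is controlled by per-module files under /etc/asl/ rather than /etc/newsyslog.conf; the paths below are from memory, so check what your build actually ships (ls /etc/asl). Audit retention is set in /etc/security/audit_control.
          # /etc/asl.conf -- ttl= (rotated-file lifetime) instead of store_ttl= (ASL data store)
          > system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M ttl=30
          ? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M ttl=30
          # /etc/asl/com.apple.install and /etc/asl/com.apple.authd (paths assumed):
          # append ttl=30 to the existing "file /var/log/install.log ..." and
          # "file /var/log/authd.log ..." lines in those files, then reload syslogd:
          sudo killall -HUP syslogd
          # /etc/security/audit_control -- lo,ad,fd,fm,-all are still valid class names in 10.9:
          #   flags:lo,ad,fd,fm,-all
          #   expire-after:30d
          # tell auditd to re-read audit_control:
          sudo audit -s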

Hi Alex...
Jim,
who came up with this solution????
I got these solutions for creating log files and reconstructing the database from this forum a while back....probably last year sometime.
Up until recently, after doing this there has been no problem - the server runs as it should.
I dare say, pure luck.
The reason I do this is because if I don't, the server does NOT automatically create new empty .log files, and when it fills the current log file, it "crashes" with "unknown mailbox path" displayed for all mailboxes.
I would think you have some fundamental underlying issue there. I assume by the "unknown mailbox path" problem you mean a corrupt cyrus database?
Yes, I believe that db corruption is the case...
You should never ever manually modify anything inside cyrus' configuration database. This is just a disaster waiting to happen.
If your database gets regularly corrupted, we need to investigate why. There are many possible reasons: related processes crashing, disk failure, power failures/surges and so on.
Aha!...about a month ago - thinking back to when this problem started - there was a power outage here, over a weekend! The hard drive was "kicked out" of the server box when I returned to work that Monday....and that's when this problem started!
I suggest you increase the logging level for a few days and keep an eye on things. Then post log extracts and /etc/imapd.conf and we'll take it from there.
Alex
Ok, thanks, will do!
P.S. Download mailbfr from here:
http://osx.topicdesk.com/downloads/
This will allow you to easily rebuild if needed and, most importantly, to do proper backups of your mail services.
Thanks for that, too. I will check it out and return to this forum with an update in the near future.
Jim
Mac OS X (10.3.9)

Similar Messages

  • Where to keep the log file and the mdf/ndf

    Hi ,
    While going through Kalen Delaney's book "Inside SQL Server" I came across the statement "A general recommendation is to put log files onto their own physical disk." Here I am a bit confused: do we have to keep the log file on a separate drive of the same hard
    disk, or on a separate hard disk altogether? My DB server has only one hard disk with three logical partitions, so if I keep my log file and MDF on different drives, will I get the performance benefit?
    Thanks in advance for your reply.
    Regards Vikas Pathak

    @sateesh : but as per link "http://msdn.microsoft.com/en-us/library/bb402876.aspx"
    This rule checks whether data and log files are placed on separate logical drives. Placing both data and log files on the same device can cause contention for that device and result in poor performance. Placing the files on separate drives allows the I/O activity
    to occur at the same time for both the data and log files.
    This is probably because this is the best they can do. Checking that the logical drives are actually two different physical disks is a little out of scope for what you can do in SQL Server.
    It is also worth pointing out that the rule specifically refers to spinning disks. SSDs do not have this problem.
    Erland Sommarskog, SQL Server MVP, [email protected]
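    For what it's worth, a quick way to see where each database's data and log files actually sit (a standard catalog view; should work on any recent SQL Server):
        -- List every database's data (ROWS) and log (LOG) files with their physical paths
        SELECT DB_NAME(database_id) AS database_name,
               type_desc,
               physical_name
        FROM sys.master_files
        ORDER BY database_name, type_desc;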

  • MR11 log file

    Hi,
    While running MR11 for the GR/IR clearing account, a log file is generated and the document number is 5400000010. Where is this log file stored by default, and how can it be displayed? Besides that, F.13 automatic clearing clears those GR/IR records whose balance is 0, and any difference amount is cleared through F-03 by choosing the document numbers from the GR and IR under the same purchase order. But F.13 (automatic clearing) does not clear matching GR and IR values in spite of the same PO number, even though these values are easily traceable in the normal balance view through FBL3N. Why are these values not cleared through F.13?
    Regards,
    Samrat

    Immediate action items:
    0. Check the log file auto-growth settings too, and check whether they are practical and whether the disk still has space.
    1. If the disk holding the log file is full, add a log file on another disk (in the database property page) where you have planned to keep log files, in case you can't afford to take the db down. Once you are done, you can plan to truncate data out of
    the log file and remove the extra file if this was a one-off; if it keeps happening, look at the capacity side.
    2. You can consider shrinking the log files when no backup is running and no maintenance job (rebuild/reorg indexes, update stats) is executing, as those would block it (see the sketch after this list).
    If the db is small and copying files from prod to DR is not latency-prone, and the shrink is not happening, you can try changing the recovery model, shrinking, and then reconfiguring log shipping after reverting the recovery model.
    3. Also check whether someone mistakenly placed some old files and forgot to remove them, causing the disk-full issues.
    4. For a permanent solution, monitor the environment for capacity and allocate enough space for the log file disks. Also consider tweaking the log backup frequency from the default to suit your environment.
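    A minimal sketch of step 2 above, with hypothetical database and logical file names (check what is actually holding the log before shrinking):
        -- What is preventing log reuse? (LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, ...)
        SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'YourDb';

        -- If it is safe to shrink, the target size is in MB; the logical log file name is hypothetical
        USE YourDb;
        DBCC SHRINKFILE (YourDb_log, 1024);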
    Santosh Singh

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server Agent, and some of them pull transactional data into our database every 4 hours. The problem is that our database's log file is growing rapidly: in a day it eats up 160 GB of
    disk space. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, but even then the log
    consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach
    to clean up the log.
    FYI: all the SSIS packages in the job use transactions on
    some tasks, e.g. a Sequence Container.
    I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier I don't need the log data for point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records, but our mdf file
    size is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here?
    Help me with these issues. Thanks in advance.

    For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behaviour. You can add some space to the log file, and you should also batch your transactions as already suggested.
    Regarding the memory question about SQL Server: once it takes memory, it is not going to release it unless the Windows OS comes under memory pressure and SQLOS asks SQL Server to trim its memory consumption. So if you have set max server memory to somewhere
    near 50 GB, SQL Server will eventually utilize that much memory. What you are seeing is totally normal. Remember it is a costly task for SQL Server to release and re-acquire memory, so it avoids that by caching as much as possible; it also caches more to avoid
    physical reads, which are costly.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a CHECKPOINT in the ETL query? Try it, it might help you.
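    In SIMPLE recovery the log only becomes reusable at a checkpoint, so something along these lines between large ETL batches may help; the database and file names are hypothetical:
        -- Force a checkpoint so inactive log space can be reused between batches
        CHECKPOINT;

        -- Optionally cap the log file so it cannot consume the whole drive
        ALTER DATABASE YourDb
        MODIFY FILE (NAME = YourDb_log, MAXSIZE = 20GB);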
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • SQL Server 2012 Reorg Index Job Blew up the Log File

    We have a maintenance plan that nightly (1) runs dbcc checkdb on all databases, (2) reorgs indexes on all databases, compacting large objects, (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually the plan uses
    a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up: because the maximum file size was set to 14,000 MB, one of the alter index commands failed
    when it ran out of log space. (The dbcc checkdb step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has
    a 21,000 MB data file, with reserved space at about 10 GB. This is SQL 2012 Standard SP2 running on Windows Server 2012 Standard.

    I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
    Do you grow them afterwards, or do you let the application waste time on that during peak hours?
    I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
    I would say this is highly dubious.
    Erland Sommarskog, SQL Server MVP, [email protected]
    Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow
    increment so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
    Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Having pre-allocated space for the log files means there will not be much free space left on the disk in case ANY database needs more
    space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
    What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
    These are the main reasons I don't think people should be blindly crucified for keeping log files small when possible. I know there are many people who disagree and I'm aware of their reasons. Maybe we just had different experiences with this subject. Maybe people
    just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
    And you are right about the size of the backup, I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit of backing up a database with a small
    log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
    Restoring the database will also be faster.
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
    is an undocumented behavior and should not be relied upon.
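    On the VLF point above, a rough way to see how fragmented a log already is on SQL 2012 (DBCC LOGINFO is undocumented but widely used; it returns one row per VLF, and thousands of rows suggest the log grew in many small increments):
        DBCC LOGINFO ('YourDb');   -- database name is hypothetical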

  • Streaming log file analyzer?

    Hi there,
    We host several flv files with Akamai and recently enabled
    the log file delivery service. So, we now have Akamai-generated log
    files in W3C format. I was assuming I could use WebTrends to
    analyze these, but after looking at them briefly, it shows
    different events like play, stop, seek, etc., and I don't know that
    WebTrends would be able to process all of that.
    Our most basic requirement is to see how many times each
    video was viewed. If we could get more detailed analysis, like
    video X gets viewed on average for 2:00, but video Y only gets
    viewed for 20 seconds, that would be great as well.
    Does anyone have any suggestions for the best software to
    analyze these files?
    Thanks,
    Matt

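    If nothing off-the-shelf fits, a rough first pass over the W3C logs from the shell is possible. The field positions below are pure guesses, so read the #Fields: header in your Akamai logs and adjust:
        # Count "play" events per stream URL (field numbers $6 and $8 are assumptions)
        awk '!/^#/ && $6 == "play" { plays[$8]++ } END { for (u in plays) print plays[u], u }' akamai-*.log | sort -rn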

  • XMLForm Service message keep showing in the log file

    Hello,
    The message below keeps showing up in the log file when the form is opened in Workspace ES. It repeats something like a hundred times while the form is loading.
    It only happens with some forms, not all of them. I tried to see if there is anything strange in the form but could not find anything wrong with it, nor any explanation of this error on the Adobe site or elsewhere on the Internet. Note that I am still using LiveCycle ES 8.2.1 with SP3.
    =============
    [10/18/11 13:46:22:644 CDT] 0000003e XMLFormServic W com.adobe.service.ProcessResource$ManagerImpl log ALC-XTG-102-001: [1712] Bad value: 'designer__defaultHyphenation.para.hyphenation' of the 'use' attribute of 'hyphenation' element ''. Default will be used instead.
    ============
    Can any one please advise.
    Thanks in advance,
    Han

    Yes,
    Upgrade it to SP1 with the hot-fix. Sun has a very big bug in the 4.16 SP1 software; you have to apply that hot-fix, otherwise you will hit a big bug in the userpassword attribute and some other security issues.

  • Oracle standby/redo log file shipping keeps needing logs re-registering

    Hi
    We have Log File Shipping enabled and the prod system ships redo logs over to the LFS server. It's kept 24 hours behind. It usually ships the logs (and I believe automatically registers them) without issue.
    EXCEPT - it keeps complaining about missing redo log files.
    The file is usually there; but just needs registering with:
    alter database register or replace logfile '/oracle/S1P/saparch/S1Parch1_636443_654987192.dbf';
    (we found if we left out the 'or replace' it takes a very long time or even hangs)
    It then plods on and applies the next... can go for another 2 or 3... or 20... but then often gets stuck again, and you need to register the next.
    Can spend whole days on this...!!
    We did try running a script to register the next 1365 redo logs! It failed on 4, so I ran it again... it worked on those 4, but turned up 3 others it had worked with before! HUH?!? So manually did those 3 ... fine... it carried on rolling forward... but got stuck after 10 minutes again when it hit another it reckoned needed registering (we'd already done it twice!!).
    Any ideas?
    Ross

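    A couple of standard checks on the standby can save some of the manual registering; a sketch, assuming an ordinary managed-recovery setup:
        -- What managed recovery is doing and which sequence it is waiting for
        SELECT process, status, sequence# FROM v$managed_standby;

        -- Has a gap been detected in the shipped logs?
        SELECT * FROM v$archive_gap;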

  • Corrupt log file, but how does db keep working?

    We recently had a fairly devastating outage involving a hard drive failure, but are a little mystified about the mechanics of what went on with berkeleydb which I hope someone here can clear up.
    A hard drive running a production instance failed because of a disk error, and we had to do a hard reboot to get the system to come back up and right itself (we are running RedHat Enterprise). We actually had three production environments running on that machine, and two came back just fine, but in one, we would get this during recovery:
    BDBStorage> Running recovery.
    BerkeleyDB> : Log file corrupt at LSN: [4906][8294478]
    BerkeleyDB> : PANIC: Invalid argument
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_stack+0x20) [0x2c23af2380]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_abort+0x15) [0x2c23aee9c9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_panic+0xef) [0x2c23a796f9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_attach_regions+0x788) [0x2c23aae82c]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open+0x130) [0x2c23aad1e7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open_pp+0x2e7) [0x2c23aad0af]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so [0x2c23949dc7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(Java_com_sleepycat_db_internal_db_1javaJNI_DbEnv_1open+0xbc) [0x2c239526ea]
    BerkeleyDB> : [0x2a99596e77]
    We thought, well, perhaps this is related to the disk error, it corrupted a log file and then died. Luckily (or so we thought) we diligently do backups twice a day, and keep a week's worth around. These are made using the standard backup procedure described in the developer's guide, and whenever we've had to restore them, they have been just fine (we've been using our basic setup for something like 9 years now). However, as we retrieved backup after backup, going back three or four days, they all had similar errors, always starting with [4096]. Then we noticed an odd log file, numbered with 4096, which sat around in our logs directory ever since it was created. Eventually we found a good backup, but the customer lost several days' worth of work.
    My question here is, how could a log file be corrupted for days and days but not be noticed, say during a checkpoint (which we run every minute or so)? Doesn't a checkpoint itself basically scan the logs, and shouldn't that have hit the corrupt part not long after it was written? The system was running without incident, getting fairly heavy use, so it really mystifies me as to how that issue could be sitting around for days and days like that.
    For now all we can promise the customer is that we will automatically restore every backup as soon as it's made, and if something like this happens, we immediately try a graceful shutdown, and if that doesn't come back up, we automatically go back to the 12-hour-old backup. And perhaps we should be doing that anyway, but still, I would like to understand what happened here. Any ideas?

    Please note, I don't want to make it sound like I'm somehow blaming berkeleydb for the outage-- we realize in hindsight there were better things to do than go back to an old backup, but the customer wanted an immediate answer, even if it was suboptimal. I just feel like I am missing something major about how the system works.
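    One cheap way to catch this kind of rot sooner is to validate each backup copy offline as soon as it is made; a sketch using the stock utilities (the path is a placeholder):
        # Run catastrophic recovery against the backup copy, never against the live environment
        db_recover -c -h /backups/bdb/latest

        # Walk the log records; db_printlog complains where it cannot parse them
        db_printlog -h /backups/bdb/latest > /dev/null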

  • Alert SID .log file size too big ,How to keep it under control

    The alert_<SID>.log file size is too big. How do I keep it under control?
    -rw-r--r-- 1 oracle dba 182032983 Aug 29 07:14 alert_g54nha.log

    Metalink Note:296354.1
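    If the immediate concern is just the size, the alert log can be renamed from the shell while the instance is up; Oracle simply starts a new file on the next write. The directory below is an assumption, so confirm background_dump_dest first:
        # show parameter background_dump_dest  (run in SQL*Plus to confirm the directory)
        cd /u01/app/oracle/admin/g54nha/bdump
        mv alert_g54nha.log alert_g54nha.log.$(date +%Y%m%d)
        gzip alert_g54nha.log.$(date +%Y%m%d)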

  • What log files should I keep an eye on?

    I want to make sure there are no problems with my system, so I want to do some preemptive troubleshooting.
    Which log files are the most important ones to look at?
    Is there a log for the main boot? For example, one that shows how all the DAEMONS and MODULES have started?

    check out the syslog-ng package, it should already be installed.  set up is in /etc/syslog-ng.conf, you can tell it what to monitor, etc.
    i prefer to keep two konsole tabs open with "tail -f " running on each, following
    ~/.xsession-errors [shows kde/X errors]
    /var/log/all  [which i set up on syslog, to catch all errors/messages into one file]
    dmesg is the one that shows you what the kernel does during bootup, and /var/log/Xorg.0.log shows what X does on startup. you can find all your current open log files in /var/log (that's the default system location i believe).  and of course individual programs can also have their debug info going into outer space (ex. open firefox in console, and you'll see plenty of debugging info, mine used to end up in .xsession-errors, but i needed to have firefox's debug/log info separate so i changed the behavior)
    there is also a multi-tail command somewhere that lets you follow several files at once on one screen.  i havent used it though.
    Last edited by toxygen (2009-10-11 02:54:06)
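    For the record, plain tail can already follow several files at once, which covers most of the two-konsole-tabs setup; the file names are whatever you actually log to:
        # -F keeps following across log rotation; tail prints a "==> file <==" header per file
        tail -F /var/log/all ~/.xsession-errors /var/log/Xorg.0.log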

  • Exchange 2010 Log files keep filling up following migration from Exchange 2003

    I am migrating from Exchange server 2003 to 2010.
    Having only moved one mailbox and set up Public Folder replication, I noticed that the logs are filling up the entire 20 GB drive allocated for them, even before I have time to run my scheduled backup.
    As a temporary measure, I have enable circular logging as a workaround.
    Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?
    Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night).
    Q/ Could there be another cause as to why the log files would grow so quick in such a short amount of time?

    Hello,
    Remember that logs are truncated after successful backups, so if a lot of data is replicated between backups, a lot of logs will be stored on disk.
    "Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?"
    The best option is to run backups to truncate the logs. Circular logging is best kept to test and highly available deployments, as it can cause data loss (loss of data created between backups).
    "Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night)."
    The same as mentioned above.
    "Q/ Could there be another cause as to why the log files would grow so quick in such a short amount of time?"
    Enormous log generation can be caused by bugs in Exchange and in connecting devices; for example, iPhones can cause rapid log creation in some configurations. If you have the latest Exchange 2010 build, it shouldn't be a problem.
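    If you do leave circular logging on while the moves run, the Exchange Management Shell side looks roughly like this; the database name is hypothetical, and on a standalone (non-DAG) database the store may need a dismount/remount for the change to take effect:
        # Enable circular logging for the duration of the migration
        Set-MailboxDatabase "MBX-DB01" -CircularLoggingEnabled $true

        # ...and switch it back off once replication and the mailbox moves are finished
        Set-MailboxDatabase "MBX-DB01" -CircularLoggingEnabled $false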
    Hope it helps,
    Adam
    CodeTwo: Software solutions for Exchange and Office 365
    If this post helps resolve your issue, please click the "Mark as Answer" or "Helpful" button at the top of this message. By marking a post as Answered, or Helpful you help others find the answer faster.

  • Block corruption error keep on repeating in alert log file

    Hi,
    Oracle version : 9.2.0.8.0
    os : sun soalris
    error in alert log file:
    Errors in file /u01/app/oracle/admin/qtrain/bdump/qtrain_smon_24925.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01578: ORACLE data block corrupted (file # 1, block # 19750)
    ORA-01110: data file 1: '/u01/app/oracle/admin/qtrain/dbfiles/system.dbf'
    The system datafile has been restored from backup, but the error is still being logged in the alert log file.
    Inputs are appreciated.
    Thanks
    Prakash

    Hi,
    Thanks for the inputs
    OWNER  SEGMENT_NAME      SEGMENT_TYPE  TABLESPACE_NAME  EXTENT_ID  FILE_ID  BLOCK_ID  BYTES  BLOCKS  RELATIVE_FNO
    SYS    SMON_SCN_TO_TIME  CLUSTER       SYSTEM           1          1        19749     16384  1       1
    SYS    SMON_SCN_TO_TIME  CLUSTER       SYSTEM           2          1        19750     32768  2       1
    Thanks
    Prakash
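    For reference, the listing above looks like the output of the usual query for mapping a corrupt block back to its segment; with the file and block numbers taken from the ORA-01578 message, it is roughly:
        -- Which segment owns block 19750 of file 1?
        SELECT owner, segment_name, segment_type, tablespace_name
        FROM dba_extents
        WHERE file_id = 1
          AND 19750 BETWEEN block_id AND block_id + blocks - 1;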

  • Log files for time keeping?

    I'm interested in a log file that shows me when I log-in and log-out so I know when I start and stop working on certain projects.  Can I get that info from the "skypedebug-xxxxxxxx-xxxx.log", etc. files?

    By default there is no log file that shows activity at a specific time. What you are looking for is process accounting, which you can only enable from the Terminal.
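    Before going as far as process accounting, the system's own login records may already be enough; a quick check from Terminal:
        # Session start and end times (and durations) for your account
        last $USER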

  • How to keep the old jdbc log file

    Hi,
    whenever I restart WLS 8.1, the jdbc logs are rewritten. I can no longer find my
    previous log to debug with. How can I configure it to write to a new file, like
    the WLS log does? Thanks

    Hi Jen!
    This is a bug in 8.1 GA and SP1, and it is fixed in 8.1 SP2.
    When you specify a jdbc log name with a ".log" suffix, jdbc logging doesn't
    rotate as expected, so the prior log file gets overwritten by the new jdbc
    log when you restart WLS.
    Workaround
    Remove ".log" from the file name. It will be added by WLS automatically.
    Thanks,
    Mitesh
