Data/database availability during the ETL process

Hi
I am not sure whether this is the right forum for this question, but I am still asking for help.
We have a DSS database of size 1 TB. The batch process runs from 12:00 midnight until 6:00 am. The presentation/reporting schema is about 400 GB. Through the nightly batch job, most of the tables in the presentation layer get truncated or modified. Due to the nature of the business, this presentation layer needs to be available 24x7. Because the ETL process modifies the database, the system is currently unavailable from midnight until 6:00 am.
The business requirement is: before the ETL process starts, take a backup of the presentation layer, point the application at this backed-up area, and then let the ETL process proceed. Once the ETL process has finished, point the application back at the freshly loaded area.
Given the size of the database and the schema, does anyone know how to back up and restore the presentation layer within the same database in an efficient way?

There are a couple of possibilities here. You certainly need two sets of tables: one for loading and the other for presentation. You could use synonyms to control which set is the active reporting set and which is the ETL set, and switch them over at a particular time, as shown in the sketch below. Another approach would be to use partition exchange to swap data and index segments between the two table sets.
I would go for the former method myself.
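To make the synonym idea concrete, here is a minimal sketch; the table and synonym names (SALES_A, SALES_B, SALES_RPT) are hypothetical:

-- Reports always query the synonym, never the physical tables.
CREATE OR REPLACE SYNONYM sales_rpt FOR sales_a;
-- The nightly ETL truncates and reloads SALES_B while reports keep reading SALES_A.
CREATE OR REPLACE SYNONYM sales_rpt FOR sales_b;  -- near-instant switch-over after the load

The partition exchange variant would instead use ALTER TABLE ... EXCHANGE PARTITION to swap the freshly loaded segments into the reporting tables.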

Similar Messages

  • Database changes between data records of PSA

    Hi all,
    Does anybody know a way to flush or commit database changes between the processing of each data record from the PSA by the transfer rules?
    I have some routines in the transfer rules (updating attributes of master data from the DataSource) where I need the values from the immediately preceding data record. But a SELECT from the /BIC/Q... table shows no changes to these values.
    The SELECT statement itself is written correctly, because it returns the new values in the next run of the DTP process.
    A little explanation of what I'm trying to achieve:
    In the PSA I have account, date and value, and I need to calculate how this value changed for a given day and store the result in a time-dependent attribute of the master data. All data records with the same key (duplicate records) overwrite the result of the previous one, so only the last one is stored in the master data.
    In short, I need something like the ADDING aggregation in the transfer rule detail, but one that is able to catch duplicate records.
    Thank you for any idea.
    Regards, Filip

    A note about the investigation process:
    I changed my data flow and applied the same transfer rules from the DataSource to an ODS, to see whether it would behave differently.
    I found the data records correctly collected (aggregated) in the New Data of the ODS, but EMPTY in the Active Data! So I think something similar happened when updating the master data, and that is why I did not see any aggregation.
    So there is no need to force database commits; this problem is something else entirely.
    This should probably be a new thread, but how can key figures become empty after activation? I have searched the forum and found only one case, where end routines are implemented and they skip update rules that are initial. That is not my case.
    Happy to hear any ideas. Thanks for reading.
    Filip

  • Today at 8am EST my card was charged, but when checking order status it says expected ship date not available at this time; it also says we received your order and it is in process, payment not taken yet

    Today at 8 am EST my card was charged, but when checking the order status it says the expected ship date is not available at this time; it also says we received your order and it is in process, payment not taken yet.
    But my card was charged the full amount and I haven't gotten anything else. I've been checking for an update all day and got nothing. I ordered the iPhone 6 Plus at 3:01 am; everything went through and I was finished at 3:06 am.

        chriss2633,
    I know how exciting it is to have all the information on the new phone. I did send you a Direct Message. Can you please respond back to that message? Looking forward to hearing back from you.
    KevinR_VZW
    Follow us on Twitter @VZWSupport

  • Data Not Available. Statspack data is not available for this database instance

    Hi,
    While trying to configure the 9i db in Grid Control, I installed the Statspack data as part of the configuration. But now when I try to see the historical data under the Performance tab for the 9i db, it complains as below:
    Data Not Available. Statspack data is not available for this database instance. Make sure that Statspack is installed on the target instance.
    Please shed some of your inputs.
    thanks in advance
    PK

    Hi,
    I think it is not installed:
    Using 1097660794 for database Id
    Using 1 for instance number
    , stats$database_instance di
    ERROR at line 9:
    ORA-00942: table or view does not exist
    I did exactly as instructed while configuring the db, but it seems something is wrong. Can you please advise me on how to install it from the command line?
    Thanks
    PK
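    For reference, Statspack ships as a set of scripts under $ORACLE_HOME/rdbms/admin; a minimal command-line sketch for a 9i instance (the PERFSTAT password and tablespaces are prompted for interactively):

    sqlplus "/ as sysdba"
    @?/rdbms/admin/spcreate.sql
    -- then verify the install by taking a snapshot as the PERFSTAT user
    connect perfstat/<password>
    execute statspack.snap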

  • Custom ETL processes creation in OWB

    Hi, we are working on an Oracle Utilities BI implementation project.
    Some of the identified KPIs need custom development - the creation of complete ETL processes:
    - Extractor in CC&B (in COBOL)
    - Workflows in OWB
    - Configuration in BI
    We were able to create 4 custom ETL processes (2 fact and 2 related dimensions) including the COBOL extract program, the OWB entities (Files, Staging tables, Tables, Mappings and Workflows) and the BI tables.
    We already have the data in the BI database and in some cases we are able to configure zones to show the data in BI.
    We are facing some problems when we try to use a custom fact and a custom dimension together, for instance:
    Case 1. When we create a zone to show: Number of quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY, the graph is displayed.
    Case 2. When we create a zone to show: Number of accepted quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD oper==(10), the graph is displayed.
    Case 3. When we create a zone to show: Number of ongoing quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD oper==(20), the graph is displayed.
    Case 4. But when we create a zone to show: Number of quotes sliced by quote status - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD, no graph is displayed.
    Case 5. A different problem occurs with the single fact. We try to show the number of processes of a given type; the type of the process is a UDDGEN. When we load the zone, the following error appears: Measure 1: tf=CM_F_CF_CSW.FACT_CNT func=SUM dt=ACTV_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_F_CF_CSW.UDDGEN1 oper==(ENTRADA)
    The error displayed is: No join defined between fact table CM_F_CF_CSW and dimension table CM_F_CF_CSW. The dimension table entered in the zone parameter could be invalid for this fact table.
    Has anyone had the same problem?

    Hi user11256032, I just stumbled upon this by accident. The reason no one has answered yet is that it is in the wrong forum. (I can understand why you thought it belonged here.) Please post the question to the Oracle Utilities forum, which is here: Utilities. If that link doesn't work, go to Forum Home, then choose Industries, then Utilities. You may have to select "More ..." under Industries.
    Actually, I suspect there was an SR created for these, so your question may have been answered already.
    If you don't mind me asking, which customer is this for?
    Jeremy

  • ETL process steps in OWB

    Hi friends,
    - Can I know the ETL steps involved in OWB? If so, can you provide them?
    - Also, is there any diagram for OWB covering the ETL process from source to target (including the BI components), like the diagrams available for Informatica - showing how it takes data from the source, performs the ETL process, transforms it into the data warehouse, and how the BI components use it?
    Thanks
    Regards,
    Saro.
    Your Answers will be marked.

    Hi Saro,
    OWB does not enforce specific steps on you; you define your data warehouse architecture according to your needs. OWB fully supports you to:
    + extract the data from the source systems or files
    + load into the database (staging)
    + process the data within the database, e.g. loading it from the staging area into the core (normalized or star schema)
    + load it into OLAP cubes
    + design and monitor the overall process using process flows
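    As an illustration of the staging-to-core step, an OWB mapping ultimately generates set-based SQL along these lines; all table and column names here are assumed for the example:

    -- Load/refresh a core dimension from its staging table (names hypothetical).
    MERGE INTO core_customer_dim d
    USING stg_customer s
    ON (d.customer_key = s.customer_id)
    WHEN MATCHED THEN
      UPDATE SET d.customer_name = s.customer_name
    WHEN NOT MATCHED THEN
      INSERT (d.customer_key, d.customer_name)
      VALUES (s.customer_id, s.customer_name);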
    Regards,
    Carsten.

  • Automatic MOLAP cube : Proactive caching was cancelled because newer data became available

    When I process the cube manually after processing the dimensions, it works fine. But when I append data to the database, proactive caching kicks in, and at that point it fails.
    Sometimes it cannot resolve a key attribute because the measure group gets processed before the dimension,
    and sometimes it gives the error:
    Proactive caching was cancelled because newer data became available. Internal error: The operation terminated unsuccessfully. OLE DB error:
    OLE DB or ODBC error: Operation canceled; HY008. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'call dim Monthly 201401 2', Name of 'call dim Monthly
    201401 2' was being processed. Errors in the OLAP storage engine: An error occurred while the 'MSW' attribute of the 'call dim Monthly 201401 2' dimension from the 'callAnalysisProject' database was being processed. etc.

    I have also seen this error occur in other scenarios.
    In the first, if you have set proactive caching to refresh every minute and your query takes two minutes to refresh, the error above can be displayed. The solution is to increase the refresh interval or tune your proactive caching query.
    Related to the above, if your server is limited on available resources, that can also cause slower query response times during the refresh and the same message.

  • Communication between multiple processes

    Hi there!
    I once again have a problem concerning parallel processing in ABAP.
    The problem is this:
    I want to write a program which invokes a process that can recursively invoke another process, and so on.
    Let me try to picture it out:
    -> means invokes
    Main Program -> Thread1
    Thread 1 -> Thread 2
    Thread 2 -> Thread 3
    Thread 2 -> Thread 4
    Thread 4 -> Thread 5
    Thread 5 -> Thread 6
    As you can see, I have several threads invoking other threads. The structure of the threads invoking each other will be dynamic. Now I face the following problem:
    I want only a few threads to run at the same time (let's say 3 for now), and there are dependencies between the invoker and the invoked thread (e.g. Thread 3 needs some information from Thread 2).
    How can I let my main program know that all the data is ready? Does the WAIT UNTIL statement also apply to these nested threads?
    How does Thread 5, for example, know that there are already too many processes running and that it has to wait?
    Is there a way to queue these processes?
    Something like:
    Thread 1 - Thread 2 - Thread 3 - Thread 4 - Thread 5 - Thread 6
    If the prerequisites for Thread 3 are not fulfilled, it would look like:
    Thread 1 - Thread 2 - Thread 4 - Thread 5 - Thread 6
    and so on...
    The problem is the communication across thread boundaries. This dynamic structure is necessary due to the large amount of data I have to handle. Due to restrictions I can only use a function group and a report; no database tables or anything like that are allowed.
    I hope I was able to describe my problem. If it is unclear, please let me know and I will try to specify it further.
    Thank you in advance for your help.
    Best Regards,
    Sebastian

    @Sandeep: Thanks for your answer, I am going to have a look at that.
    @Thomas: These nested calls would be useful because of the amount of data that has to be processed. Think of a tree with over 200,000 entries. For each entry the requirement is: check whether the corresponding dataset is correct, adapt the data if necessary, and update it on the database.
    The approach with the nested threads would check a single node, look whether there are child nodes and, if so, start a thread for processing the child nodes. These would check each child node, if necessary start another thread, and so on. Child nodes can only be changed if the parent change was successful, so I have the dependency right there.
    Yesterday I had another idea that should work:
    My main program first checks the root node and then the direct children of the root node (1st hierarchy level). It then invokes a thread for each child node which again has child nodes and for which changes were applied (2nd hierarchy level). Each thread 'returns' a list of nodes for which the main program should once again invoke another thread, and so on. (This builds up a processing queue within the main program.)
    It is a similar approach to the one with nested threads, but the control structure is clearer and no nesting of threads is necessary.
    Thanks and Best Regards,
    Sebastian

  • How to Exchange Data and Events between LabVIEW Executable​s

    I am having some trouble determining how to design multiple programs that can exchange data or events with each other when compiled into separate executables. I will lay out the design scenario:
    I have two VIs, one called Status and the other Graph.  The Status VI displays the serial number and status of each DUT being tested (>50 devices).  The Status VI has one timed loop along with a while loop that contains an event structure.  If the user clicks on the DUT Status Cluster, the event structure needs to pass the serial number to the Graph VI.  The Graph VI, when called, fetches the records for the DUT based on the serial number and time frame.  This VI is a producer/consumer, so the user can change the time frame of the records to display on the front panel graph.
    I have a couple of reasons the VIs need to be separated into independent applications: one is that the underlying database fetches tend to slow the threads down between the two VIs; the other is that they may be distributed onto separate systems (don't consider this in the design criteria).
    I have been researching the available methods to pass the serial number to the Graph VI.  My initial idea was to use a command line argument, but this does not allow the Status VI to pass another serial number to the Graph once it has started (I do not want to allow the user to execute multiple Graph applications, because the database query can load down the server).
    Is there a programming method I can implement between the two VIs that will allow me to compile them as two executables and allow the Status program to repeatedly send an updated serial number event to the Graph program?
    I have reviewed several methods: Pipes (oglib_pipe), Action Engine, and Shared Variable.  I am not sure which method will give me the ability to use an event-driven structure to inform the Graph program when to update the serial number.  Any suggestions, tutorials or examples would be greatly appreciated.

    I have used the Action Engine (aka functional global) approach for many years and it works well. Something to think about: if you make the event's datatype a variant, the only code that needs to know the actual structure of the event is the function that responds to it. Hence, a single event can service multiple functions.
    Simply create a cluster containing an enum typedef that lists the functions the event will service, and a variant that carries the function's event data. From anywhere in the code you can fire the event (via the functional global) by selecting the function from the enum and converting the function-specific data to a variant. On the receiving end, the event handler uses the enum to determine which function gets the data and sends the variant to it. The event handler doesn't know or care what the actual event data is, so you could in theory add new functions without modifying the event handler.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Unique Index Error while running the ETL process

    Hi,
    I have installed Oracle BI Applications 7.9.4 and Informatica PowerCenter 7.1.4 and have done all the configuration steps specified in the Oracle BI Applications Installation and Configuration Guide. While running the ETL process from DAC for the execution plan 'Human Resources Oracle 11.5.10', some tasks go to status Failed.
    When I checked the log files for these tasks, I found the following error:
    ANOMALY INFO::: Error while executing : CREATE INDEX:W_PAYROLL_F_ASSG_TMP:W_PRL_F_ASG_TMP_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_PRL_F_ASG_TMP_U1
    ON
    W_PAYROLL_F_ASSG_TMP
    (
    INTEGRATION_ID ASC
    ,DATASOURCE_NUM_ID ASC
    ,EFFECTIVE_FROM_DT ASC
    )
    NOLOGGING PARALLEL
    with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    EXCEPTION CLASS::: java.lang.Exception
    I found some duplicate rows in the table W_PAYROLL_F_ASSG_TMP for the combination of columns on which it is trying to create the index. Can anyone give me information on the following?
    1. Why is it trying to create a unique index on a combination of columns which may not be unique?
    2. Is it a problem with the data in the source database (i.e., because of duplicate rows in the source system)?
    How do we fix this error? Do we need to delete the duplicate rows from the warehouse table manually and re-run the ETL process, or is there another way to fix the problem?

    This query will identify the duplicate in the Warehouse table preventing the Index from being built:
    select count(*), integration_id, src_eff_from_dt from w_employee_ds group by integration_id, src_eff_from_dt having count(*)>1;
    To get the ETL to finish issue this delete to the W_EMPLOYEE_DS table:
    delete from w_employee_ds where integration_id = '2' and src_eff_from_dt ='04-JAN-91';
    To fix it so this does not happen again on another load, you need to find the record in the Vision DB; it is in the PER_ALL_PEOPLE_F table. I have a Vision source and this worked:
    select rowid, person_id , LAST_NAME FROM PER_ALL_PEOPLE_F
    where EFFECTIVE_START_DATE = '04-JAN-91';
    ROWID               PERSON_ID  LAST_NAME
    AAAWXJAAMAAAwl/AAL  6272       Kang
    AAAWXJAAMAAAwmAAAI  6272       Kang
    AAAWXJAAMAAAwmAAA4  6307       Lee
    delete from PER_ALL_PEOPLE_F
    where ROWID = 'AAAWXJAAMAAAwl/AAL';
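    For the table actually failing in the log above, the same duplicate check can be adapted using the columns from the index definition (a sketch, not from the original reply):

    select integration_id, datasource_num_id, effective_from_dt, count(*)
    from w_payroll_f_assg_tmp
    group by integration_id, datasource_num_id, effective_from_dt
    having count(*) > 1;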

  • Difference between Integration Process and Monitoring Process

    Hi Experts,
    What is the difference between the Integration Process and the Monitoring Process available in PI 7.1?
    SAP says that a monitoring process is a special kind of integration process that receives event messages.
    My doubt is that an integration process can also receive event messages.
    Why were these two different types of entities created for the same purpose?
    And what is the technical difference between the two from a PI perspective?
    Regards,
    Sami.

    My question is now answered.
    [https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/70a25d3a-e4fa-2a10-43b5-b4169ba3eb17]
    On page 17 of this PDF the following sentence appears:
    From a technical perspective, there is no difference between a monitoring process and an integration process.
    Logically, though, they are two different things.
    Monitoring processes are used to receive only event messages, which comprise event data only.
    For example, purchase order creation is an event, and its event message will carry event data such as Order ID, Created On, Created By, Quantity, etc., instead of the whole purchase order.
    Whereas an integration process is a way to provide a solution in specific circumstances, such as where we have to automate our process or where we need something in between for the course of the communication.
    Guys thanks for your precious time.
    Regards,
    Sami.

  • Data Transfer / Communication between Live Applications

    Gurus,
    I am basically an E-Business Suite developer. This is my first experience posting questions in the Application and Architecture community.
    Please read the following and let me know if this is the correct thread for this question; it would also be appreciated if you could provide suggestions or solutions.
    Q:
    We are on E-Business Suite 11.5.9 with custom applications built using Oracle Forms. Recently one of our departments decided to buy custom software to replace one of the applications. The other applications we built are still running in E-Business Suite. The custom software is based on .NET and SQL Server. We in IT are right now researching how to transfer data between our existing applications and the new software; they are on different platforms.
    This is a new initiative for us because all these years we have worked on Oracle applications, and this is the first time we have to step away from our forte. This may be simple for people with experience in this field, but it is challenging for us.
    Your views and suggestions are highly valuable.
    Thank You.
    Karthik.

    Hi,
    Thank you for the reply. Below are the notes from a person who has already done some of the research. I am not sure whether to go with what is mentioned below or to research more...
    Below are the options for loading data from SQL Server to Oracle; I have read a limited amount on each option. See if something convinces you and I will explore that option further (a small sketch of option 1 follows the list).
    1. Linked Servers: Linked servers can be set up on the SQL Server instance and then data can be pushed out to the Oracle schema.
    a. Limitation: the data types supported by the provider/ODBC driver used for creating the linked server; performance is good for decent-size tables.
    b. Advantage: all SQL based.
    http://msdn.microsoft.com/en-us/library/ms188279.aspx
    http://www.codeproject.com/Articles/35943/How-to-Config-Linked-Servers-in-a-Minute
    2. Oracle Heterogeneous Services: This is Oracle's answer to Microsoft's linked servers. Once set up, it can be used to pull data from the SQL Server db and load it into Oracle. Same limitations and advantages as 1.
    http://www.csee.umbc.edu/portal/help/oracle8/server.815/a67784/hs_ch6.htm
    3. DTS (Data Transformation Services) or SSIS (SQL Server Integration Services): Microsoft SQL Server 2000 comes with DTS and SQL 2005 with SSIS. SSIS is more robust than DTS for this job and can be used to load data into the Oracle schema. These tools can provide a real ETL process for data loading.
    4. Scripting framework: Scripts can be written to bcp data out of a SQL Server db and then use SQL*Loader to load the data into the Oracle schema.
    5. Third-party tools: A couple of third-party tools also provide these options.
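    A minimal sketch of option 1, assuming a linked server named ORCL has already been created against the Oracle database (all object names here are hypothetical):

    -- Run on the SQL Server instance; four-part naming reaches the Oracle schema.
    INSERT INTO ORCL..HR.TARGET_TABLE (ID, NAME)
    SELECT CustomerID, CustomerName
    FROM dbo.Customers;

    Option 2 is the mirror image: an Oracle database link over Heterogeneous Services lets you pull the data from the Oracle side with INSERT INTO target SELECT ... FROM source_table@mssql_link.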

  • SQL 2012 Database Availability Group - Force Automatic Failover

    Hi All,
    I'd appreciate some help in understanding the following scenario in my test environment.
    I have created a DAG with 2 replica servers (both of which are Hyper-V VMs running Windows Server 2012 Standard).
    From a client PC in my test lab, I can connect to the virtual listener of my DAG and confirm via the "select @@servername" command that I am connecting to the primary replica server.
    Using the Failover Wizard, I can easily move the primary instance between my 2 nodes, and the command above confirms that the primary replica server has changed. So far so good.
    What I wanted to test was what would happen to my DAG in the event of a complete loss of power to the server acting as the primary replica. At first, I thought I would stop the SQL Server service on the primary server, but this did not result in my DAG failing over to the secondary replica. I have found that the only way I can make it fail over is by shutting down the primary server in a controlled manner.
    Is there any reason why either stopping the SQL Server service on the primary replica, or indeed forcing a power-off of the primary replica, does not result in the DAG failing over to the secondary replica?
    Thanks,
    Bob

    Hi,
    I would first verify that by "Database Availability Group" you mean an AlwaysOn Availability Group.
    How did you set the FailureConditionLevel?
    Whether the diagnostic data and health information returned by sp_server_diagnostics warrants an automatic failover depends on the failure-condition level of the availability group. The failure-condition level specifies which failure conditions trigger an automatic failover. There are five failure-condition levels, ranging from the least restrictive (level one) to the most restrictive (level five). For details about failure-condition levels, see:
    http://msdn.microsoft.com/en-us/library/hh710061.aspx#FClevel
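    For reference, the level is set per availability group with T-SQL; a minimal sketch (the group name MyAG is hypothetical, and 3 is the default level):

    -- Run on the primary replica.
    ALTER AVAILABILITY GROUP [MyAG]
    SET (FAILURE_CONDITION_LEVEL = 3);
    -- Check the current setting.
    SELECT name, failure_condition_level FROM sys.availability_groups;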
    There are two useful articles may be helpful:
    SQL 2012 AlwaysOn Availability groups Automatic Failover doesn’t occur or does it – A look at the logs
    http://blogs.msdn.com/b/sql_pfe_blog/archive/2013/04/08/sql-2012-alwayson-availability-groups-automatic-failover-doesn-t-occur-or-does-it-a-look-at-the-logs.aspx
    SQL Server 2012 AlwaysOn – Part 7 – Details behind an AlwaysOn Availability Group
    http://blogs.msdn.com/b/saponsqlserver/archive/2012/04/24/sql-server-2012-alwayson-part-7-details-behind-an-alwayson-availability-group.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support
    Hi,
    Thanks for the reply.
    It's an AlwaysOn Availability Group.
    In my test lab, I have changed the quorum configuration to a file share witness, and that has allowed an automatic failover when I shut the primary replica server down (rather than powering it off).
    I'll take a look at the links you provided.
    Regards,
    Bob

  • Start Delta IP Only If Data Is Available

    Hi all,
    We have an InfoPackage that delivers data using a DataSource in delta mode. Unfortunately, the InfoPackage delivers data only sporadically; mostly it returns zero records. Our question is how to find out whether data is available and, if not, how to discontinue further processing of the process chain and preserve system resources.
    The corresponding DataSource is of type Extract DataSource for an InfoCube in the same system. I tried to get the relevant information from the request tables, but could not establish the necessary relations as yet...

    Hello Timo,
    The only standard way I can think of is to set the traffic light to yellow or red when no data is available (transaction SPRO / Automated Processes / Extraction Monitor Settings / Set Traffic Light Color). But as this customizing is system-wide, it may be too drastic in your case.
    Alternatively, you can create a program that reads the delta queues in the source system (the table to be read will depend on the DataSource) and use the result in a "Decision between multiple values" process chain component. You will then be able to continue the process flow or not, depending on the delta content.
    Regards,
    Fred

  • ETL processing Performance and best practices

    I have been tasked with enhancing an existing ETL process. The process dumps data from a flat file into staging tables and then processes records from those initial tables into the permanent tables. The first step, extracting data from the flat file into the staging tables, is done by BizTalk; no problems there. The second part, processing records from the staging tables and updating/inserting into the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks, because the code loads the data from the initial tables (using stored procs), loops through each record in .NET, makes several subsequent calls to stored procedures to process the data, and then updates the record. I see a variety of problems here; the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts so that I can convince my co-workers that this is not the best solution.
    Anonymous

    I'm not going to call myself an ETL expert, but you are right on the money that this is not an efficient way to work with the data. Indeed very chatty. Once you have the data in SQL Server, keep it there. (Well, if you are interacting with another data source, it's a different game.)
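    As an illustration of the set-based alternative to the row-by-row loop, a minimal sketch - the table and column names are assumed, not taken from the original post:

    -- One UPDATE replaces the per-row stored-procedure calls.
    UPDATE p
    SET p.amount = s.amount
    FROM dbo.Permanent AS p
    JOIN dbo.Staging AS s ON s.business_key = p.business_key;
    -- And one INSERT handles the new rows.
    INSERT INTO dbo.Permanent (business_key, amount)
    SELECT s.business_key, s.amount
    FROM dbo.Staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Permanent AS p
                      WHERE p.business_key = s.business_key);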
    Erland Sommarskog, SQL Server MVP, [email protected]
