Data loaded into ODS, but no activation possible

Hello Experts,
I've loaded data into an ODS, but no activation is possible!
In the Monitor it just says something like "not activated yet".
Usually it only takes a minute to activate, so I'm confused.
Any suggestions?
Thanx
Axel

Hi,
In SM37, give Job Name as BI_ODSA* and User as *, and check whether there is any job with Active status. If so, we can say the ODS activation is still in process.
With rgds,
Anil Kumar Sharma .P
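
If you prefer to check this programmatically instead of through SM37, here is a minimal ABAP sketch that reads the standard job status table TBTCO (status 'R' means the job is currently active); the report name is hypothetical, so verify the table fields in your release:

REPORT z_check_ods_activation_jobs.

DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
      ls_job  TYPE tbtco.

* Same selection SM37 would do above: jobs named BI_ODSA* that are active ('R')
SELECT * FROM tbtco INTO TABLE lt_jobs
  WHERE jobname LIKE 'BI_ODSA%'
    AND status = 'R'.

IF lt_jobs IS INITIAL.
  WRITE: / 'No active ODS activation job found.'.
ELSE.
  LOOP AT lt_jobs INTO ls_job.
    WRITE: / ls_job-jobname, ls_job-jobcount, ls_job-strtdate, ls_job-strttime.
  ENDLOOP.
ENDIF.

If the list is empty and the request still shows "not activated", the activation job may have terminated; check SM37 for cancelled BI_ODSA* jobs as well.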

Similar Messages

  • PERFORM_CONFLICT_TAB_TYPE  shortdump during data loading into ODS

    Hi,
    I'm trying to load data into an ODS in the Production and QA systems, but I'm getting a short dump that says PERFORM_CONFLICT_TAB_TYPE.
    In the Development system, the data loads into the ODS fine.
    So please tell me what I have to do.
    I will assign the points.
    rizwan

    Here it is:
    Note 707986 - Writing in trans. InfoCubes: PERFORM_CONFLICT_TAB_TYPE
    Summary
    Symptom
    When data is written to a transactional InfoCube, the termination PERFORM_CONFLICT_TAB_TYPE occurs. The short dump lists the following reasons for the termination:
               ("X") The row types of the two tables are incompatible.
               ("X") The table keys of the two tables do not correspond.
    Other terms
    transactional InfoCube, SEM, BPS, BPS0, APO
    Reason and Prerequisites
    The error is caused by an intensified type check in the ABAP runtime environment.
    Solution
    Workaround for BW 3.0B (SP16-19), BW 3.1 (SP10-13)
               Apply the attached correction instructions.
    BW 3.0B
               Import Support Package 20 for 3.0B (BW3.0B Patch20 or SAPKW30B20) into your BW system. The Support Package is available once note 0647752 with the short text "SAPBWNews BW3.0B Support Package 20", which describes this Support Package in more detail, has been released for customers.
    BW 3.10 Content
               Import Support Package 14 for 3.10 (BW3.10 Patch14 or SAPKW31014) into your BW system. The Support Package is available once note 0601051 with the short text "SAPBWNews BW 3.1 Content Support Package 14" has been released for customers.
    BW3.50
               Import Support Package 03 for 3.5 (BW3.50 Patch03 or SAPKW35003) into your BW system. The Support Package is available once note 0693363 with the short text "SAPBWNews BW 3.5 Support Package 03", which describes this Support Package in more detail, has been released for customers.
    The notes specified may already be available to provide advance information before the Support Package is released. However, in that case the short text still contains the term "Preliminary version".
    Header Data
    Release Status: Released for Customer
    Released on: 18.02.2004  08:11:39
    Priority: Correction with medium priority
    Category: Program error
    Primary Component: BW-BEX-OT-DBIF Interface to Database
    Secondary Components: FIN-SEM-BPS Business Planning and Simulation
    Releases
    Software Component   Release   From Release   To Release   And Subsequent
    SAP_BW               30        30B            30B
    SAP_BW               310       310            310
    SAP_BW               35        350            350
    Support Packages
    Software Component    Release   Package Name
    SAP_BW_VIRTUAL_COMP   30B       SAPK-30B20INVCBWTECH
    Related Notes
    693363 - SAPBWNews BW SP03 NW'04 Stack 03 RIN
    647752 - SAPBWNews BW 3.0B Support Package 20
    601051 - SAPBWNews BW 3.1 Content Support Package 14
    Correction Instructions
    Correction Instruction   Valid From   Valid To   Software Component   Ref. Correction   Last Modification
    301776                   30B          350        SAP_BW               J19K013852        18.02.2004 08:03:33
    Attributes
    Attribute            Value
    Further components   0000031199
    Vishvesh

  • No data load into ODS although source system connection is OK

    Hello experts,
    Although everything is OK with my source system connection, SAP BW loads no data into the ODS and cube. In the ODS administration view the requests are registered and the QM status is green, but the ODS ID is zero.
    Everything worked fine when running the process chain, but suddenly it stops when activating the data in the ODS because there is no data in it.
    Any suggestions?
    thanx
    Axel

    Hi Axel,
    The error message you specified is only possible if the DataSource is not installed, so I'm a bit confused.
    Check whether you are able to see this DataSource in RSA6 in the source system.
    Also, try replicating the source system on the BW side, then check whether your respective InfoSource is available.
    Hope it helps,
    regards,
    Parth.

  • Total number of records loaded into ODS and InfoCube

    Hi,
    I loaded some data records from an Oracle source system into an ODS and an InfoCube.
    My source system colleague provided some data records based on his selection on the Oracle side.
    How can I see how many data records were loaded into the ODS and the InfoCube?
    I can check in the Monitor, but that is not correct (because I loaded a second and a third time with 'ignore duplicate records'), so I don't think the Monitor will give me the correct number of data records loaded for the ODS and the InfoCube.
    Is there any transaction code or other way to find the number of records loaded in the case of an ODS and an InfoCube?
    Please tell me.
    I'll assign the points.
    bye
    rizwan

    Hi,
    I went into the ODS Manage screen and looked at the 'transferred' and 'added' data records. Both are the same.
    When I total the added data records, it comes to 147,737.
    But when I check the active table (/BIC/A<odsname>00), the total number of entries is 137,738.
    Why is there that difference?
    And in the case of an InfoCube, how can I find the total number of records loaded into it?
    Is there a table to check, like the fact table and dimension tables?
    Please tell me.
    Thanks
    rizwan
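
    The active table and fact table names follow fixed conventions (/BIC/A<odsname>00 for the ODS active table, /BIC/F<cubename> for the F fact table of an InfoCube), so you can count the rows directly in SE16, or with a minimal ABAP sketch like the one below; the object names ZODS and ZCUBE are hypothetical placeholders for your own objects:

    REPORT z_count_loaded_records.

    DATA: lv_tab TYPE tabname,
          lv_cnt TYPE i.

    * Active table of the ODS: /BIC/A<odsname>00 (ZODS is a placeholder)
    lv_tab = '/BIC/AZODS00'.
    SELECT COUNT(*) FROM (lv_tab) INTO lv_cnt.
    WRITE: / lv_tab, 'entries:', lv_cnt.

    * F fact table of the InfoCube: /BIC/F<cubename> (ZCUBE is a placeholder)
    lv_tab = '/BIC/FZCUBE'.
    SELECT COUNT(*) FROM (lv_tab) INTO lv_cnt.
    WRITE: / lv_tab, 'entries:', lv_cnt.

    Note that the active table holds one row per key (overwrites collapse duplicates), which is why its count can differ from the sum of 'added' records shown in the Manage screen.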

  • Delete and full load into ODS versus Full load into ODS

    Can someone please advise: in the context of a DSO populated with the overwrite function, would the FULL LOAD in the following two options (1 and 2) take the same amount of time to execute, or would one take less than the other? Please also explain the rationale behind your answer.
    1. Delete of data target contents of ODS A, followed by FULL LOAD into ODS A, followed by activation of the ODS.
    2. Plain FULL LOAD into ODS A (i.e. without the preceding delete data target contents step), followed by activation of the ODS.
    Repeating the question: will the FULL LOAD in 1 take the same time as the FULL LOAD in 2? If one takes less than the other, why so?
    Context: normal 3.5 ODS with separate change log, new data, and active record tables.

    Hi,
    I think the main difference is that with the process "delete and full load" you will only get the freshly loaded data inside the DSO. With a plain full load, the system uses the modify logic: overwrite for all data sets already in the DSO, and insert for all new data sets which are not yet in the DSO (see the sketch below).
    Regards
    Juergen
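
    To illustrate the overwrite/insert behaviour Juergen describes, here is a minimal ABAP sketch of the same upsert logic on an internal table standing in for the active table; the structure, key, and values are invented for illustration only:

    REPORT z_dso_overwrite_demo.

    TYPES: BEGIN OF ty_rec,
             doc_id(10) TYPE c,  " stands in for the DSO key
             quantity   TYPE i,
           END OF ty_rec.

    DATA: lt_active TYPE SORTED TABLE OF ty_rec WITH UNIQUE KEY doc_id,
          lt_load   TYPE STANDARD TABLE OF ty_rec,
          ls_rec    TYPE ty_rec.

    * Existing active data: one record with key '123'
    ls_rec-doc_id = '123'. ls_rec-quantity = 10.
    INSERT ls_rec INTO TABLE lt_active.

    * A full load delivers key '123' again (overwrite) and a new key '456' (insert)
    ls_rec-doc_id = '123'. ls_rec-quantity = 99. APPEND ls_rec TO lt_load.
    ls_rec-doc_id = '456'. ls_rec-quantity = 5.  APPEND ls_rec TO lt_load.

    LOOP AT lt_load INTO ls_rec.
      READ TABLE lt_active TRANSPORTING NO FIELDS
           WITH TABLE KEY doc_id = ls_rec-doc_id.
      IF sy-subrc = 0.
        MODIFY TABLE lt_active FROM ls_rec.   " key exists: overwrite
      ELSE.
        INSERT ls_rec INTO TABLE lt_active.   " new key: insert
      ENDIF.
    ENDLOOP.

    * lt_active now holds 123 -> 99 and 456 -> 5. A preceding delete only
    * matters for records that the new full load does not deliver again.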

  • Adding leading zeros before data is loaded into DSO

    Hi
    In the PROD_ID field below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character. If the leading zeros are missing, the DSO activation fails, and I have to add them manually in the PSA table. I want to add the leading zeros, if they are missing, before the data is loaded into the DSO. The total character length is 40, so e.g. if the value is 1502 there should be 36 zeros before it, and if the value is 265721 there should be 34 zeros. Only two value lengths arrive, 4 or 6 characters, so 36 or 34 leading zeros are always needed if they are missing.
    Can we use the CONVERSION_EXIT_ALPHA_INPUT function module? As this is a character field, I'm not sure how to use it in this case. Do I need to convert it to an integer first?
    Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please give sample code and tell me where the code needs to go, in the rule type or in the start routine.

    Hi,
    Can you check at the InfoObject level what kind of conversion routine it uses?
    Use transaction RSD1, enter your InfoObject, and display it.
    At the DataSource level you can also see which external/internal format is maintained.
    If your InfoObject uses the ALPHA conversion routine, it will get the leading zeros automatically.
    Check how the data arrives from the source in RSA3.
    If you are receiving this issue for some records only, then you need to examine those records. If you do end up needing a routine, see the sketch below.
    Thanks
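
    In case the InfoObject cannot simply use the ALPHA routine, a hedged sketch of a BW 3.5 transfer rule routine calling CONVERSION_EXIT_ALPHA_INPUT might look like the following. The field name PROD_ID and the source structure field are assumptions; ALPHA_INPUT right-justifies digits and pads with leading zeros to the full length of the output field, so no integer conversion is needed:

    * Routine code for the PROD_ID rule. TRAN_STRUCTURE is the source
    * structure of a 3.x transfer rule routine (field name assumed).
    DATA: lv_prod_id(40) TYPE c.

    lv_prod_id = TRAN_STRUCTURE-prod_id.

    * Pads digits with leading zeros to the 40-character target length:
    * '1502' becomes 36 zeros followed by 1502.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_prod_id
      IMPORTING
        output = lv_prod_id.

    RESULT = lv_prod_id.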

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM, or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools. There are also many other ways to achieve the same.
    - Krish

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
    I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps, documents, or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
    In order to load into SAP you have the following options
    1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
    2. Use LSMW programs to load into the SAP target. These programs require input files generated in specific layouts, which can be produced with BODS.
    3. Direct input. The direct input method is to write ABAP programs targeting specific tables. This approach is very complex and therefore needs a lot of careful thought.
    The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, and so on.
    However, the data load into SAP needs to be object-specific. Merely targeting the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned is related to articles: these tables hold sales document data for already created articles. So if you want to specifically target these tables, you may need to prepare an LSMW program for the purpose.
    To answer your question on whether it is possible to load objects like materials, customers, and vendors using BODS: yes, you can.
    Below is a standard list of IDocs that you can use to load into an SAP ECC system from a non-SAP system:
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
    The list is long.
    To achieve this, you will need the functional design consultants to provide ETL mapping from the legacy data to the IDoc target schema and fields (better to have the technical table names and fields too). You should then prepare the data by putting it through the standard check table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat file output for load into SAP using LSMW programs or generate IDoc messages to the target SAP system.
    If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, like RFC connectivity and authorizations, needed for BODS to successfully send IDocs into the SAP target.
    Do let me know if you need more info on any specific queries or issues you may encounter.
    kind regards
    Raghu

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the OLAP DML Reference from the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right, but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. Example would include loading data at "quarter" in historical years, but at "month" in the current year, it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each added as one of the children of its aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it for a result of 10, the aggregate command would recalculate based on january and february for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation.

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a Validation transform. Data that passes the validation is loaded into the Pass table, and data that fails is loaded into the Fail table.
    My requirement is: if any data was loaded into the Fail table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me for this error
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution; my mistake was that the query was not valid MySQL syntax.
    This works:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    This was the erroneous query (MySQL expects DELETE FROM, not DELETE TABLE):
    sql('MySQL', 'delete table world.customer_salesfact_details')
    Thanks for your concern
    PrasannaKumar

  • How to make data loaded into cube NOT ready for reporting

    Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded into a cube is available for reporting. Now if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written against the cube, not against the aggregate, so you only know at a query's runtime whether it will hit a particular aggregate. This means that whether a query gets its data from the aggregate or from the cube, it should ultimately return the same data in both cases. Now if a request is added to the cube but not to the aggregate, these two objects hold different data, so the system takes the safer route of not making the un-rolled-up data visible at all rather than risking inconsistent data.
    Hope this helps...

  • Summing up key figure in Cube - data load from ODS

    Hello Gurus,
    I am doing a data load from ODS to cube.
    The records in the ODS are at line-item level, and all need to be summed up at header level. There is only one key figure. The ODS has header and line-item fields.
    I am loading only the header field data (and not the item field data) from the ODS to the cube.
    I am expecting only one record in the cube (for all the item-level records) with the key figure summed up, but I don't see that.
    Can anyone please explain how to achieve it.
    I promise to reward points.
    =====
    Example to elaborate my point.
    In ODS
    Header-field   Item-field   Quantity
    123            301          10
    123            302          20
    123            303          30
    Expected record in Cube
    Header-field   Quantity
    123            60
    ====================
    Regards,
    Pramod.

    Hello Oscar and Paolo.
    Thanks for the reply. I am using BW 7.0
    Paolo suggested:
    >> If you don't add the item number to the cube and set quantity to addition in the update rules from ODS to cube, it works.
    I did that; I still get 3 records in the cube.
    Oscar suggested:
    >> What kind of aggregation do you have for your key figure in the update rules (update or no change)?
    It is "summation", and it cannot be changed. (Or at least I do not know how to change it.)
    There are other dimensions in the cube which correspond to fields in the ODS.
    But, I just mentioned these two (i.e. header and item-level) for simplicity.
    Can you please help?
    Thank you.
    Pramod.

  • AWM Newbie Question: How to filter data loaded into cubes/dimensions?

    Hi,
    I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause. Is there something else I must do to filter the data?
    Thanks

    Hi there,
    Which release of Oracle OLAP are you using? 10g? 11g?
    You can use database views to filter your dimension and cube data and then map these in AWM.
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Data Load to PA Info Type 491 - Possible?

    I'm having difficulty locating a (non-manual) method of loading data into Infotype 491 (PA0491).
    LSMW runs in batch-input mode.  This seems to have the effect of not displaying the infotype 491 screen in transaction PA40.  (I don't think this is configuration - but if I'm wrong, please tell me.)
    I have found no FM, BAPI, or IDOC to support input into this table.  There are a few export programs.
    Any help is much appreciated.
    Thanks,
    Jeff

    Hi all,
    I have the same requirement from the end user.
    Is it a good idea to update the active table of the ODS from a custom (Z) ABAP program? As I understand it, you then have no log history and cannot delete by request, because no request is generated.
    We have one ODS that contains invoices, with all the invoice fields like material, vendor, etc. It is updated according to ABAP logic that we specify in the start routine; each time we need to update these fields, we have to unload from the ODS to the PSA and load again with a DTP.
    Let me give more details:
    We have a DSO that is updated each week. We load the information into different PSAs, and all of these PSAs pass to the ODS through one InfoSource and transformation rule. In this transformation rule we have an ABAP routine that does some validation and assigns values to different fields.
    In this process everything looks fine. If we need to update information in the DSO (already loaded, but the user needs to change some fields), we download the information from the DSO to the PSA and load it again with a DTP process; this works fine.
    Now the user wants this kind of change to be applied online. This means, for example, that all the invoices currently loaded into the DSO need to be analyzed and updated according to new parameters specified in other tables.
    The DSO is standard and contains the three basic tables: activation queue (new data), active data, and change log.
    My question is: is it possible to update the active records table of the DSO directly with a Z program, and is it a good idea? We want to avoid unloading and reloading every time they need to update certain fields that are calculated or updated by the transformation rule or load.
    Thank you for your help.

  • Data load from ODS to

    A basic question: I am working on a BI 7 system but using the 3.5 Business Content objects for loading purchasing data.
    DataSource 2LIS_02_ITM
      -> transfer rule
        -> InfoSource
          -> update rule
            -> 0PUR_O01
              -> update rule
                -> 0PUR_O02
    From the InfoPackage, when I schedule the load immediately (init with data), the data is successfully loaded into the first ODS 0PUR_O01 but doesn't load into the second ODS 0PUR_O02.
    How do I get the data into the second ODS?

    Hello Ramesh,
    I have some questions on the activation and loading of this DSO.
    1. We are using 7.0; do you still need to use the 3.5 InfoSource/update/transfer rules?
    2. If I only need to activate the 0PUR_O01 DSO, with data coming only from 2LIS_02_ITM, what are my options?
    3. When you activate the Business Content objects with "In Data Flow Before", how did you deal with transformations and transfer/update rules? Did you get them from Business Content, or did you have to create your own?
    4. I hope you are using the LO Cockpit.
    Appreciate your Help.
    Thanks
    Jagadish
