Arch Linux Apache directory indexing issue.

I made it so that the root directory is visible, but the issue is that if I go to http://mydomain.com/ it shows the directory listing, when I want it to display index.html or index.php like normal. Help please? (I only installed Arch today.)

I added index.php to the directory index line in httpd.conf, but now I get this error:
systemctl status httpd.service
httpd.service - Apache Web Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
   Active: failed (Result: exit-code) since Fri 2014-01-31 02:48:51 GMT; 11s ago
  Process: 1557 ExecStop=/usr/bin/apachectl graceful-stop (code=exited, status=1/FAILURE)
  Process: 1581 ExecStart=/usr/bin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 1483 (code=exited, status=0/SUCCESS)
Jan 31 02:48:51 homeserver apachectl[1581]: Syntax error on line 241 of /etc/httpd/conf/httpd.conf:
Jan 31 02:48:51 homeserver apachectl[1581]: Invalid command 'DirectoryIndex:', perhaps misspelled or defined by a module not included in the server configuration
Jan 31 02:48:51 homeserver systemd[1]: httpd.service: control process exited, code=exited status=1
Jan 31 02:48:51 homeserver systemd[1]: Failed to start Apache Web Server.
Jan 31 02:48:51 homeserver systemd[1]: Unit httpd.service entered failed state.
Last edited by TheCyberShocker (2014-01-31 02:51:59)
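
A minimal sketch of what line 241 of httpd.conf likely needs to look like: the directive is DirectoryIndex with no colon, followed by a space-separated list of files. The <IfModule> guard assumes mod_dir is loaded, which it is in the stock httpd.conf; Indexes can also be dropped from Options if the raw directory listing is no longer wanted as a fallback.

<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>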

Similar Messages

  • BIA index issues.

    Hi All,
    I have a BIA index issue: the query result is wrong when I execute the query with the BIA index in transaction RSRT.
    But when I checked, the master data of the InfoObject is consistent; I verified this in transaction RSRV.
    Could you please give me any ideas on how to correct this?
    Thanks,
    Vikram.

    Hi,
    Most likely you need to run an attribute change run. The attribute change run updates the BIA index according to the data stored in an InfoObject. You can do it either from a process chain or manually through Administrator Workbench -> Tools -> Hierarchy/Attribute Changes.
    Regards,
    Adam

  • Indexing Issue : Idc Analyzer and other tools

    Hi All,
    I am facing some indexing issues with my UCM instance. Some of the files get stuck in wwGen Revision Status, some show "up to date" in the index but really are not until a re-index happens, etc.
    I used IDC Analyzer to check the indexing issues, but during analysis I only got an error pop-up saying "Error checking index" with no details in the log.
    Are there any additional arguments/settings which can be used to get the details. Otherwise, this tool doesn't seem to be of much help in this case.
    Which are the other tools that can be used to check and correct the health of index?
    How can we purge unneeded revisions, history, and other data in UCM to "clean up" and remove bloat (using Archiver, etc.)?
    Note: I am using ucm 11g with SSXA.
    Edited by: PoojaC on Aug 1, 2012 10:26 PM

    Hi ,
    The Analyzer tool should be used when there is a mismatch in the weblayout/vault files which causes the indexer, Archiver, etc. to fail.
    Read more about this from the following links:
    http://docs.oracle.com/cd/E23943_01/doc.1111/e10792/e01_interface.htm#CACFDIID
    http://docs.oracle.com/cd/E23943_01/doc.1111/e10792/c03_processes.htm#sthref268
    Hope this helps .
    Thanks
    Srinath

  • LSMW -Recording Index issue

    Hi,
    I was trying to add partners to a customer through XD02.
    I need to add multiple partners to a customer.
    I recorded the transaction, but I have an index issue.
    For example, when I recorded it, the entry went on the 6th line, but for the next record it should be added on the 7th line;
    instead the 6th line is getting replaced. How do I resolve this issue?
    Should I code something in FIELDMAPPING?
    Regards
    Prasad

    Hi Vara,
    I am giving the material master upload through the LSMW Direct Input method.
    Just follow the steps.
    Using Tcode MM01 -- maintain the source fields as follows:
    1) mara-matnr  char(18)
    2) mara-mbrsh  char(1)
    3) mara-mtart  char(4)
    4) makt-maktx  char(40)
    5) mara-meins  char(3)
    The flat file format is as follows:
    MAT991,C,COUP,Srinivas material01,Kg
    MAT992,C,COUP,Srinivas material02,Kg
    AMT993,C,COUP,Srinivas material03,Kg
    MAT994,C,COUP,Srinivas material04,Kg
    MAT995,C,COUP,Srinivas material05,Kg
    goto Tcode LSMW
    give Project Name
         Subproject Name
         object Name
    Press Enter -
    Press Execute Button
    It gives 13 radio-Button Options
    do the following 13 steps as follows
    1) select radio-Button 1 and execute
       Maintain Object Attributes
    select Standard Batch/Direct Input
       give Object -- 0020
           Method -- 0000
       save & Come Back
    2) select radio-Button 2 and execute
       Maintain Source Structures
       select the source structure and click on the Create button
       give source structure name & Description
       save & Come Back
    3) select radio-Button 3 and execute
       Maintain Source Fields
       select the source structure and click on create button
       give
       first field
            field name    matnr
            Field Label   material Number
            Field Length  18
            Field Type    C
       Second field
            field name    mbrsh
            Field Label   Industrial Sector
            Field Length  1
            Field Type    C
       Third field
            field name    mtart
            Field Label   material type
            Field Length  4
            Field Type    C
       fourth field
            field name    maktx
            Field Label   material description
            Field Length  40
            Field Type    C
       fifth field
            field name    meins
            Field Label   base unit of measurement
            Field Length  3
            Field Type    C
      save & come back
    4) select radio-Button 4 and execute
       Maintain Structure Relations
       go to blue lines 
          select first blue line and click on create relationship button
          select Second blue line and click on create relationship button
          select Third blue line and click on create relationship button
      save & come back
    5) select radio-Button 5 and execute
       Maintain Field Mapping and Conversion Rules
       Select the Tcode and click on Rule button there you will select constant
       and press continue button
       give Transaction Code : MM01 and press Enter
       after that
       1) select the MATNR field, click on Source Field (this is the field mapping), select MATNR and press Enter
       2) select the MBRSH field, click on Source Field (this is the field mapping), select MBRSH and press Enter
       3) select the MTART field, click on Source Field (this is the field mapping), select MTART and press Enter
       4) select the MAKTX field, click on Source Field (this is the field mapping), select MAKTX and press Enter
       5) select the MEINS field, click on Source Field (this is the field mapping), select MEINS and press Enter
      finally     
      save & come back
    6) select radio-Button 6 and execute
       Maintain Fixed Values, Translations, User-Defined Routines
       Create FIXED VALUE Name & Description as MM01
       Create Translations Name & Description as MM01
       Create User-Defined Routines Name & Description as MM01
       after that delete  all the above three just created in the 6th step
       FIXED VALUE --MM01
       Translations --MM01
       User-Defined Routines --MM01
       come back
    7) select radio-Button 7 and execute
       Specify Files
       select On the PC (Frontend) -- and click on the Create button (F5)
          give the path of the file, e.g. "c:\material_data.txt"
          description: -
          separators: select the Comma radio button
       press Enter, then save & come back
    8) select radio-Button 8 and execute
       Assign Files
       Save & come back
    9) select radio-Button 9 and execute
       Read Files
       Execute
       come back
       come back
    10) select radio-Button 10 and execute
        Display Imported Data
        Execute and press enter
        come back
        Come back
    11) select radio-Button 11 and execute
        Convert Data
        Execute
        come back
        Come back
    12) select radio-Button 12 and execute
        Display Converted Data
        Execute & come back
    13) select radio-Button 13 and execute
        Start Direct Input Program
       select the Program
       select continue button
    go with via physical file
    give the lock mode as 'E'
    and execute
    Thanks & regards
    sreeni

  • RH8 - Why do once-fixed auto-sizing, ranked index issues reappear in 64-bit build environment?

    I'm using RoboHelp 8.0.2.208 fully integrated with Telelogic Synergy source control, and we generate WebHelp. In building the latest release (117) of our software the generated WebHelp has a couple of issues that weren't present in the previous builds. First, the auto-size pop-up topics are now cutting off text when they worked fine pre-117. Second, changes we'd made several releases ago to remove ranking in the index reverted, and we're back to ranked index results.
    Our long-suffering Build Manager and I are trying to troubleshoot why this happened, and there are a couple of variables as of this build. Our developers are moving to PowerBuilder 12 as of this release, but we don't see anything that points to this being connected to the issue. Another variable is that we're in a new build environment that uses 64-bit OS. (There were the recent sunspots, too, but, naa, don't think that's it...). While I've found in this forum a discussion regarding the cut-off text in pop-ups (BTW, no extra lines ever had to be added), as well as the discussion about how to remove ranking in the index (http://forums.adobe.com/message/2901513#2901513), we'd like to find out what caused these issues to show up again now.
    This also raises for us the issue of legacy files and how RoboHelp handles those in upgrading. It was when we moved from RoboHelp 5X to RoboHelp 8 that we first encountered and solved the ranked index issue. My recollection is that we also dealt with the pop-ups not autosizing properly at that time, although I haven't been able to find documentation about how we resolved that.
    Is there code within our custom skin, old .css file, template/layout, or ? that could be bringing legacy issues into play here? What do the files in WebHelp5Ext do? Will migrated template_skin and/or template_stock eventually cause legacy issues? Does each subsequent RoboHelp release update to current coding standards?
    Thanks!

    CelesteD wrote:
     1] Is there code within our custom skin, old .css file, template/layout, or ? that could be bringing legacy issues into play here?
    2] What do the files in WebHelp5Ext do? Will migrated template_skin and/or template_stock eventually cause legacy issues?
    3] Does each subsequent RoboHelp release update to current coding standards?
    1] Code that is good now will likely have problems in the future. Code created in the days of DOS can now be a problem. In this forum we can only address identified issues.
    2] Perhaps Willam can answer the first part. The second part has a similar answer to 1 above. Right now it should work, one day it might not but when and if that happens, I am sure Adobe will fix it.
    3] If you mean the code that makes RoboHelp work, only Adobe would know that. If you mean the web outputs, I understand they meet current W3C standards, but the standards are ever-changing; all Adobe can do is meet the current standards and change things when the standards change.
    Does that answer the questions?
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Search Indexing Issue

    Hi All,
    I am doing search indexing in ATG 10. The following error occurred while indexing.
    Can you please give me your suggestions to solve this error?
    Content Item: atgrep:/CustomProductCatalog_production/book/prod117063?locale=el
    atg.repository.search.indexing.IndexingException: java.lang.RuntimeException: CONTAINER:atg.repository.RepositoryException; SOURCE:org.jboss.util.NestedSQLException: Transaction
    is not active: tx=TransactionImple < ac, BasicAction: 7f000001:c850:4ec0a480:60d81 status: ActionStatus.ABORTING >; - nested throwable: (javax.resource.ResourceException:
    Transaction is not active: tx=TransactionImple < ac, BasicAction: 7f000001:c850:4ec0a480:60d81 status: ActionStatus.ABORTING >)
    Regards,
    Rams

    Hi Rams,
    We have faced a similar issue and did the following:
    1. Changed the default transaction timeout from 300 to 10000ms in /jboss/server/<server-name>/deploy/transaction-jboss-beans.xml
    2. Increased the JBOSS connection pool size to min as 40 and max as 100
    If you are unable to do indexing even after changing the above settings
    3. Modify AEConfig.xml, present in the bin folder of the search engine directory, with the below settings:
    MemoryIncrementSize - 0x00800000
    MemoryReserveSize - 0x100000000
    MemoryThreshold - 0x008000000
    4. If you are still facing the indexing issue, modify the below property in the Configuration component:
    /atg/search/routing/Configuration
    defaultPartitionPhysicalCount = 1 (the default value is 19; you can set it to one or two)
    Hope the above information helps you.
    Shabari
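    For reference, a minimal sketch of step 4 as a Nucleus properties override (the localconfig path is an assumption; only the property name and value come from the post above):
    # <ATG>/home/localconfig/atg/search/routing/Configuration.properties  (assumed location)
    defaultPartitionPhysicalCount=1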

  • Re indexing issues in Apple Loop Browser (LP 9.0.2 / OSX 10.6.1 / Digi 002r

    I'm trying to re index my loop browser in LP9 / OSX10.6.1
    Are you having the same problem?
    After trashing the 'Apple Loop Index', I pull in a folder from an external drive and notice that previous folders added to my browser have gone, including Jam Packs.
    Also, I can't find the 're-index' option within the loop browser search option (right click brings down a list but the re-index option isn't there)
    Interested to see what you find.

    Yep, I also had issues where indexing the LP8 way (dragging folders with loops from the Finder onto the Logic loop browser) only pretended to work: I dropped the folder on the LB, a progress window appeared, things apparently got done, but then nothing, no new entries in the Loop Browser.
    So it seems that it is an L9 problem, since our Macs are different in every respect (me: PPC, you: Intel; me: Leopard, you: Snow Leopard).
    I finally decided to put all my loops in their default install locations and reindex them from there:
    *Library/Audio/Apple Loops/Apple:*
    Jam Pack 4 - Symphony Orchestra
    Jam Pack Voices
    Jam Pack World Music
    *Library/Audio/Apple Loops/iLife Sound Effects:*
    (13 folders)
    *Library/Audio/Apple Loops/User Loops:*
    (some folders with my own and 3rd party Loops)
    *Library/Application Support/Garageband/Apple Loops:*
    Apple Loops for Garageband
    Jam Pack 3 - Rhythm Section
    for thorough reindexing, here are the steps:
    1. Quit Logic and/or any other app that uses Apple Loops.
    2. Go to all the locations I pointed out and trash all the index files from the *Apple Loops Index* folders. Also check the same locations in your Users/'you'/Library folder.
    3. Start Logic with a new empty project and open the Loop Browser window by hitting the O key. Now drag the folders mentioned above in bold onto your Loop Browser. Logic should now index them correctly, including the folder menu (screenshot: http://farm3.static.flickr.com/2743/4085966954bf4fc039d7o.png) in the Loop Browser.
    regards, Erik.

  • LiveCache - LC10 message - Index issue

    hi,
    LiveCache - LC10, - Problem Analysis ->Performance -> Database Analyser -> bottleneck report
    The message read as follows:
    LiveCache- Bottle-neck messages:
    2 tables contain > 1.000.000 records but only 20.000 rows will be sampled for statistics.
    Table SAPR3./SAPAPO/ORDKEY contains 8247892 rows(205921 pages), sample rows 20000
    Table SAPR3./SAPAPO/STOCKANC contains 1319385 rows(25413 pages), sample rows 20000
    It looks like this would affect the processing, and the delay may be an issue for CIF queue processing.
    Wondering if the sampling could be increased? Is there any OSS note available to do this? This index is implemented using a BAdI.
    Any input on this issue is appreciated.
    Thanks,
    RajS

    Please check the following notes regarding changing the sample of the statistics run:
    Note 808060 - Changing the estimated values for Update Statistics
    Note 927882 - FAQ: SAP MaxDB Update Statistics
    Kind regards,
    Mark

  • Dual boot with archlinux and windows issue

    Hi again!
    This is what i did from the beginning :
    Windows was already installed, and then I installed Arch Linux after shrinking the Windows partition. I made a partition for boot, one for swap and one for root (I wanted to make one for home too, but only 4 primary partitions are allowed). I made the boot partition bootable, and Windows was bootable too, and I got the message that there are two bootable partitions so it couldn't write the partition table. So I removed the bootable flag from Windows and left only the boot partition bootable. I then installed the GRUB loader to the boot partition, edited the GRUB menu and uncommented the part where Windows was, like it says in the wiki. After restarting I could see the menu to choose between Arch Linux and Windows; I chose Windows, logged in successfully, and after restarting again I couldn't see the GRUB loader and was automatically logged into Windows.
    Does anyone know what might be the issue?
    Thanks !
    Last edited by shak (2009-03-27 20:20:30)

    Yes, I only have one hard drive. I managed to find a solution, not that elegant though: I've installed Acronis OS Selector from Windows, and I can choose between Windows and Linux with Acronis on boot. If I choose Linux I get to the GRUB menu and can choose between Arch Linux and Windows. So it seems that the GRUB menu is still there, but it doesn't appear when I boot for some reason.
    Last edited by shak (2009-03-27 20:34:40)

  • Indexing issue

    Hi All,
    I've run into another problem while trying to index a web repository. The issue has to do with trying to index links that contain URL variables whose values have blank spaces (i.e. not encoded) in them. For example, this link indexes just fine:
    <a href="my folder\my file.htm?var1=hi%20steve" target=_blank>
    This link does not:
    <a href="my folder\my file.htm?var1=hi steve" target=_blank>
    It fails with the following error:
    processing failed     com.sapportals.wcm.repository.InvalidArgumentException: Bad Request
    They are the exact same link, except that the first link contains %20 instead of a space in the value of the variable. Note that this issue only happens with a blank space in the URL variable value; it does not happen with blank spaces in the path or file name.
    Any ideas what I need to change to make this work?
    Thanks!
    -StephenS

    I opened an OSS message with SAP on this, and they've said that it's a bug. They are working on a fix for it.
    -StephenS

  • Z-index issues

    Hello
    I am having issues with z-index and layering. I have an iframe with a full-screen video acting as a background. I used a Spry tabbed panels widget for my tabs and text, and canNOT get it to appear in FRONT of the video. The iframe z-index is set at 1 with position absolute. The Spry panel is z-index 2 with position absolute. The file the iframe references is an .html (I heard iframes might have trouble with .php files).
    Also, the really confusing part is that the website works fine in Safari on a Mac, but it won't work in Safari on my PC, or in Google Chrome / Mozilla / IE.
    Any ideas?
    thanks so much for the help.

    Not quite. Flash can be inserted into the stacking order by adding the correct wmode. Add
    <param name="wmode" value="transparent"></param>
    to your Flash file, and wmode="transparent" in the embed, and all positioned HTML elements will float over the Flash object.
    This will also knock out the background of the Flash object, allowing the HTML background to show through. wmode=opaque will also allow the stacking of the Flash, but keep the Flash background.
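    For reference, a minimal sketch of the object/embed markup with wmode set (the file name and dimensions are placeholders, not taken from the original post):
    <object type="application/x-shockwave-flash" data="video.swf" width="640" height="360">
      <param name="movie" value="video.swf" />
      <param name="wmode" value="transparent" />
      <embed src="video.swf" type="application/x-shockwave-flash" wmode="transparent" width="640" height="360" />
    </object>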
    Nancy O. wrote:
    Flash objects always rise to the top no matter which stacking order you use.  I'm afraid the only truly, cross-browser reliable way around this is to move your menus away from competing Flash objects.
    Related link:
    http://veerle.duoh.com/index.php/blog/comments/experimenting_with_flash_content_ and_z_index/
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb
    http://alt-web.blogspot.com

  • Apache2 config issue with CFMX8 on Ubuntu

    hi there -
    I have a new Ubuntu laptop I purchased from Zareason, (great
    deal, nice people). They were cool to install CFMX8 developer
    version on my laptop before shipping.
    My issue is this:
    The cfadmin works fine, so I know CF is at least up and
    running, and connected to the web server thru port 8500.
    so,
    http://localhost:8500/CFIDE/administrator/
    works just dandy.
    http://localhost/test.cfm
    (with some simple code) outputs to the browser the literal code
    The problem I have is that when I put a simple test .cfm page
    in my documentroot for apache (mine is /var/www), the CF code
    outputs straight to the browser. I know that SOME scripts execute
    from this directory, since I can run PHP scripts.
    Most of my config options for Apache2 seem to be in the
    /etc/apache2/apache2.conf and /etc/apache2/sites-available/default.
    I can't for the life of me figure out how to get a simple
    script to run from this dir. I've looked at permissions,
    configurations, etc. and have come up empty.
    any ideas folks? Thanks in advance.
    hogan

    Well, I found the issue. The adapter config needs to have a parent config tag; it was missing in my case.
    <?xml version="1.0" ?>
    <config>
      <adapter type="scs" default="true" name="myadapter">
        <config>
          <property name="port">4444</property>
          <property name="host">161.222.84.128</property>
          <property name="type">socket</property>
        </config>
        <beans template="classpath:/META-INF/resources/adapter/adapter-services-scs.jxml"/>
      </adapter>
    </config>

  • Performance of using a "Like" Filter *is not due to straight index issue*

    Hello,
    We have just released an Oracle APEX app to the users and ran into a performance issue in the production version that was not in development. The interactive report goes against a simple view that returns Data similar to:
    NAME VALUE MIN MAX
    BEL123 1 1 3
    BEL245 3 3 4
    AB222 2 1 3
    The issue is that when a filter is used on the name column with a LIKE operand (name LIKE 'BEL%'), the query performance becomes very poor (the average is 29 seconds to return, according to the APEX stats). These are huge tables, but the performance in the development area is quick (almost instantaneous). Here are the things I have looked into to try and narrow the problem down:
    1) I have tried running the exact same query in SQL*Plus, and the performance is quick in both production and development, so it's not the indexes.
    2) Creating brand new reports from scratch causes the same results in production and development, so it's not anything specific that has been changed in the two versions (which are identical).
    3) Putting NAME LIKE 'BEL%' directly into the SQL clause of the report produces quick returns in production, so the problem is not the APEX user not seeing the indexes for some reason. It also ensures that any data differences in the two databases are not the root cause of this.
    So my only theory is that somehow the use of the LIKE filter does not allow the index on the NAME column to be used, but only in production. Any ideas? I am at a loss.
    Edited by: user491396 on Jan 7, 2009 2:18 PM

    g.myers wrote:
    Apex does run the query the same way a direct connection does. It gets parsed, bind variables are peeked and an optimizer plan produced.
    You have a query performance problem, not an Apex problem.
    Oracle manages all SQL this way, irrespective of where it comes from.
    Since we can only see the view, and not the full query (ie with all the tables, joins, predicates...) it is all pretty much guesswork for us.
    It could be either the initial bind value (ie what was in the variable originally queried). If it was '%', then as a predicate it is pretty much useless. Your application should either prevent a '%' in the first character, or generate a separate SQL for that. If the '%' was the second character, then (depending on data volumes and data distribution) that may also result in a poorly performing plan.
    Depending on db version you could look at v$sql_bind_capture for that SQL and see what values were used for peeking.
    How many rows in DT_SCADA_TAGS ? The full table scan entry suggests you have 75 million in DT_SCADA_VALUES
    The hash join between DT_SCADA_DATETIMES and DT_SCADA_VALUES suggests that there isn't a decent join path between the two (e.g. no date index on DT_SCADA_VALUES, or one that indicates a low selectivity).
    Through testing, the initial bind variable does not impact the performance; even if it's a full unique value without % it does not work any differently. In this capture it was a full value, 'BEL2395489'. There are 75 million rows in DT_SCADA_VALUES. There are indexes on the dates and tag names. I'll post the view for completeness at the end.
    The outstanding issue/knowledge from here is that an APEX filter using LIKE may be handled differently than a LIKE within the initial SQL by the parser, and may lead to a different execution plan.
    SELECT DT_SCADA_DATETIMES.DT_SCADA_DATETIME AS DT_SCADA_DATETIME,
           DT_SCADA_TAGS.TAG_NAME AS TAG_NAME,
           DT_SCADA_TAGS.TAG_DESCRIPTION AS TAG_DESCRIPTION,
           DT_SCADA_VALUES.ACTUAL_VALUE AS ACTUAL_VALUE,
           DT_SCADA_VALUES.MIN_VALUE AS MIN_VALUE,
           DT_SCADA_VALUES.MAX_VALUE AS MAX_VALUE,
           DT_SCADA_VALUES.AVG_VALUE AS AVG_VALUE,
           TRUNC(DT_SCADA_DATETIMES.DT_SCADA_DATETIME) AS DT_SCADA_DATE,
           TO_NUMBER(TO_CHAR(DT_SCADA_DATETIMES.DT_SCADA_DATETIME, 'HH24')) AS DT_SCADA_HR,
           TO_NUMBER(TO_CHAR(DT_SCADA_DATETIMES.DT_SCADA_DATETIME, 'MM')) AS DT_SCADA_MONTH,
           TO_NUMBER(TO_CHAR(DT_SCADA_DATETIMES.DT_SCADA_DATETIME, 'YYYY')) AS DT_SCADA_YEAR,
           TO_NUMBER(TO_CHAR(DT_SCADA_DATETIMES.DT_SCADA_DATETIME, 'DD')) AS DT_SCADA_DAY
      FROM
           DT_SCADA_TAGS   DT_SCADA_TAGS,
           DT_SCADA_VALUES DT_SCADA_VALUES,
           DT_SCADA_DATETIMES DT_SCADA_DATETIMES
    WHERE DT_SCADA_DATETIMES.DT_SCADA_DATETIME_ID =
           DT_SCADA_VALUES.DT_SCADA_DATETIME_ID
       AND DT_SCADA_TAGS.DT_SCADA_TAG_ID = DT_SCADA_VALUES.DT_SCADA_TAG_ID
    ORDER BY DT_SCADA_DATETIME DESC, DT_SCADA_TAGS.TAG_NAME
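    For reference, a hedged sketch of the v$sql_bind_capture check mentioned above (the sql_id value is a placeholder to be looked up in v$sql for the report's cursor; it is not part of the original post):
    SELECT sql_id, child_number, name, datatype_string, value_string
      FROM v$sql_bind_capture
     WHERE sql_id = '&sql_id'
     ORDER BY child_number, position;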

  • Serious 2.x media sync / ID3 indexing issue

    I've found a serious issue with the media indexer; I don't know if this has been reported before. Right now, my Pre 2 with webOS 2.1 is unusable as an mp3 player.
    First I tried to sync music with iTunes on my Mac with "The Missing Sync". Everything was copied over, but only very few songs were properly viewable in the Music app. The rest was tagged as "unknown": artist, album, everything, even though the music is properly tagged.
    Then I tried to use "Salling Media Sync", the free version. After a restart it worked; I could see all my music with their proper tags, no "unknown" mess anymore. I then tried to copy music again manually, with a simple drag & drop. The new music was not recognized again: "unknown" artist, album, etc.
    I then deleted everything again and used a custom script for Apple iTunes to sync. Everything unknown again.
    Right now, only Salling Media Sync seems to work. I'm not the only one experiencing this; I already asked in a German forum and there are more.
    Any idea? This is extremely annoying.
    Post relates to: Pre 2 p102ueu (Unlocked EU)

    was this a result of updating my iphone to ios 4.2?
    Yes. iOS 4.2.1 requires iTunes 10.1 or greater, which requires OS X 10.5.8 or greater.

  • Loads delayed due to Indexing Issue

    Hi SAP gurus,
    Please suggest the correct answer, as we have an issue where the deletion/creation of index process takes a long time to complete.
    Also, how do the partitions in an InfoCube grow, and what steps should one take to increase the performance?
    Full points will be rewarded for right answers
    Regards ,
    Subash Balakrishnan

    Hi Subash,
    Firstly, your question is very generic. It depends on whether it is taking time creating the index on a DSO or an InfoCube.
    InfoCubes are more prone to index creation issues than DSOs, due to the high number of uncompressed requests. However, this also depends on the database architecture:
    ORACLE:
    With an Oracle database, if you have a cube with 1000 uncompressed requests and 10 dimensions (every dimension brings with it a local bitmap index), you have 1000 partitions for the table itself and 10 x 1000 index partitions, which makes a total of 11,000 database objects. This amount of objects hurts most when updating statistics or dropping/recreating secondary indexes. In situations with a high load frequency you will most certainly run into problems in this area sooner or later.
    If you do a statistics update, all these 11,000 objects will receive new statistics, which will take some time and may raise locking problems during execution, or at least a serialization when changing the statistics fields in the database catalog tables.
    If you drop/recreate secondary indexes, all these objects must be removed or created in the database catalog, which is also a serial operation and may again raise locking situations and/or long runtimes. Additionally there is a lot of DB cost for returning the allocated disk space.
    You will not experience trouble from this direction in the beginning, when the number of partitions is low. But you will run into problems randomly at first, when the number of partitions increases, and permanently after the number of partitions has exceeded some (not specifiable, system- and context-dependent) threshold.
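    (As an illustration only, not from the original reply: a sketch of counting fact-table partitions on Oracle. The '/BIC/F%' pattern assumes custom InfoCubes in the BIC namespace.)
    SELECT table_name, COUNT(*) AS partition_count
      FROM dba_tab_partitions
     WHERE table_name LIKE '/BIC/F%'
     GROUP BY table_name
     ORDER BY partition_count DESC;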
    SQL Server:
    With SQL Server 2005 onwards, there is a limitation of 1000 partitions per table, and each loaded uncompressed request in an InfoCube is a partition in SQL Server. In a BW system, each uncompressed request is loaded into one partition, and once the 1000-partition limit has been reached in a SQL Server database, it will continue writing each new request to the 1000th partition and will write an error to the system log (SQL Error 7719). This is done only to avoid a hard failure when loading data; it is not the recommended business process to keep loading requests into the last partition. Continuing to load into the 1000th partition could cause a performance problem later when trying to delete requests, creating indexes, and updating statistics of the InfoCube.
    How to check number of partitions on the SQL server database:
    Execute report RSDD_MSSQL_CUBEANALYZE
    - Menu Settings => Expert mode
    - Press the Details button
    - Choose the number of minimum partitions (choose let say 500)
    - Press the button Start Checks
    This will display all the tables with more than 500 partitions in the database.
    So, to avoid this index creation issue for InfoCubes, it is suggested to keep a smaller number of uncompressed requests in the InfoCube. Compress the InfoCube up to the latest request wherever possible.
    Hope this helps. Award points if helpful.
    Regards
    Tanuj
