What are communication structures like KOMK and KOMP?

Hi,
What are the communication structures like KOMK and KOMP?
What is their significance?
Any material on them would be appreciated.
I have a requirement to move delivery item data to the delivery header when creating the shipment cost document. While doing this, should I also change the values in KOMP?
Thank you,
Surya

Hi,
KOMK is the pricing communication header and KOMP is the pricing communication item. They are work structures that are filled at runtime and handed to the condition technique, so the fields they contain are what access sequences and condition tables can see during pricing and related determinations (free goods, output, and so on).
Or check these links....
http://help.sap.com/saphelp_40b/helpdata/es/13/7155967935d1118b3f0060b03ca329/content.htm
http://sapsdforum.com/2007/10/23/pricing-in-sd-in-great-detail/
http://sapsdforum.blogspot.com/
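For the second part of your question: custom fields that pricing needs to see are normally appended to KOMP via the include KOMPAZ (and to KOMK via KOMKAZ) and then filled in a pricing preparation user exit. Below is a minimal sketch only, using the sales order variant USEREXIT_PRICING_PREPARE_TKOMP in include MV45AFZZ; the corresponding exits for billing and for the shipment cost document are different, and ZZPSTYV is a hypothetical append field used purely for illustration.
FORM userexit_pricing_prepare_tkomp.
* Sketch only: copy an item-level value into the pricing communication
* item so that access sequences and condition tables can read it.
* ZZPSTYV is assumed to have been added to KOMP via the include KOMPAZ.
  MOVE vbap-pstyv TO tkomp-zzpstyv.  "item category of the sales document item
ENDFORM.
Whether you also need to change KOMP in your case depends on whether the condition records for shipment costing need those item-level values: the condition technique can only read what ends up in KOMK (header data) and KOMP (item data).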
Reward points if helpful....
Regards
AK

Similar Messages

  • Looking for a new laptop: what are the differences between the Pro and the Air, besides size? Does the Air perform like the Pro?


    The new MacBook Pro and MacBook Air are extremely close in form factor.
    The newest MacBook Pro is essentially a larger MacBook Air with a Retina display and, at increasing prices, options for more speed up to discrete graphics and a quad-core processor.
    Both the Air and the new Pro now have PCIe SSDs and non-upgradeable (soldered) RAM.
    The Air is the lightweight, portable form factor, fast to boot and shut down, and the 13" Air has longer battery life than any of the MacBook Pros.
    Both have 802.11ac Wi-Fi.
    Both have soldered RAM and no SuperDrive.
    Both have slim profiles and SSD storage.
    The only real differences now are (in the most expensive Pros) faster and quad-core processors, discrete graphics in the top-end model, and of course the Retina display.
    Both are now very good for travel.
    Other than those features, the form factors of the Air and Pro are very close, so it is mostly a matter of features and price.
    You need an external hard drive for backups regardless of what you get. Drop into an Apple Store, handle both, and make your choice based on features such as Retina or non-Retina; at a distance they now look like the same computer.
    The Pro weighs more, but nowhere near what the older MacBook Pros did just a month ago.
    The new MacBook Pro is a different creature entirely from the older MacBook Pro; the new Pro is thicker than the Air, but I'd frankly call the newest Pro a "MacBook Air with Retina display", or maybe a "MacBook Air Pro with Retina display".
    Instead of Air vs. Pro, it is now really a smooth transition from Air to Pro; rather than comparing two different creatures, it is more like comparing a horse with a racehorse.
    Get either one with 8 GB of RAM (preferably); the upgrade from 4 GB costs very little. The i7 is only about 15% faster than the i5 on heavy applications, and no faster on most apps, while the i5 has longer battery life.
    As you can see below, the non-Retina 13" Air has about 82% of the resolution of the MacBook Pro with Retina display;
    there is no magical number of pixels per inch that automatically equates to Retina quality.
    http://www.cultofmac.com/168509/why-you-might-be-disappointed-by-the-resolution-of-those-new-retina-display-macs-feature/
    A huge internal SSD isn't a game changer for anything; you need an external HD anyway.
    What you won't read on Apple.com etc. is that the larger SSDs are much faster due to SSD density:
    "The 512GB Samsung SSD found in our 13-inch model offers roughly a 400MB/s increase in write speeds over the 128GB SanDisk/Marvell SSD"
    http://blog.macsales.com/19008-performance-testing-not-all-2013-macbook-air-ssds-are-the-same
    Here is an excellent video comparison between the 11” I5 vs. I7 2013 Macbook Air.
    http://www.youtube.com/watch?v=oDqJ-on03z4
    http://www.anandtech.com/show/7113/2013-macbook-air-core-i5-4250u-vs-core-i7-4650u/2
    i5 vs. i7 performance, 13" MacBook Air 2013:
    Boot performance: 11.7 (i5) vs. 11.4 (i7)
    Cinebench: 1.1 (i5) vs. 1.41 (i7)
    iMovie import and optimize: 6.69 (i5) vs. 5.35 (i7)
    iMovie export: 10.33 (i5) vs. 8.20 (i7)
    Final Cut Pro X: 21.47 (i5) vs. 17.71 (i7)
    Adobe Lightroom 3 export: 25.8 (i5) vs. 31.8 (i7)
    Adobe Photoshop CS5 performance: 27.3 (i5) vs. 22.6 (i7)
    Reviews of the newest Retina 2013 Macbook Pro
    13”
    Digital Trends (13") - http://www.digitaltrends.com/laptop-...h-2013-review/
    LaptopMag (13") - http://www.laptopmag.com/reviews/lap...play-2013.aspx
    Engadget (13") - http://www.engadget.com/2013/10/29/m...-13-inch-2013/
    The Verge (13") - http://www.theverge.com/2013/10/30/5...ay-review-2013
    CNet (13") - http://www.cnet.com/laptops/apple-ma...-35831098.html
    15”
    The Verge (15") - http://www.theverge.com/2013/10/24/5...w-15-inch-2013
    LaptopMag (15") - http://www.laptopmag.com/reviews/lap...inch-2013.aspx
    TechCrunch (15") - http://techcrunch.com/2013/10/25/lat...ok-pro-review/
    CNet (15") - http://www.cnet.com/apple-macbook-pro-with-retina-2013/
    PC Mag (15") - http://www.pcmag.com/article2/0,2817,2426359,00.asp
    Arstechnica (15") - http://arstechnica.com/apple/2013/10...-pro-reviewed/
    Slashgear (15") - http://www.slashgear.com/macbook-pro...2013-26303163/

  • What are the advantages of Compressor and is it even necessary?


    Necessary for some and not for others – probably a large majority – who can get by with the presets available in FCX.
    The users who need Compressor are those who want to control the encode parameters to get the best possible trade-off between file size and quality, or those who want to do things like standards conversions, complex frame-rate changes, better rescaling, de-interlacing and re-interlacing, output formats beyond those available in FCX, chapter markers for DVD and Blu-ray authoring, batch conversions for multiple purposes through droplets, and access to clusters for faster rendering.
    Russ

  • What are the units of "Width" and "Height" of a Shape?

    What are the units of the "Width" and "Height" properties of a Shape when programming?
    Something odd like points or twips or tweedles or nibbles?
    http://www.ransen.com Cad and Graphics software

    Width and Height are properties of type Single; they represent the dimensions of the shape in points, where 72 points = 1 inch (so, for example, a shape with Width = 144 is 2 inches wide).
    Regards, Hans Vogelaar (http://www.eileenslounge.com)

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amounts of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables receive inserts every few seconds, on the order of hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update' and
    how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand the role of 'Journalizing CDC'.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI, e.g. Logminer/Streams or GoldenGate) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table.
    You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield on the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM set up (there are a few E-Business Suite tables with no primary key so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and in the right order, when applying to your target tables; otherwise you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for 'changed data only' to be presented to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM / IKM set up).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target.
    user604062 wrote:
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how I can use them in my scenario.
    Hopefully I have explained it above. It's the type of thing you really need to play around with, and thoroughly review the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    In general, what are the relations between 'Journalizing' and 'IKM'?
    The JKM simply presents (only) changed data to ODI. It removes the need for you to decide 'how' to get the updates and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last update date, etc.).
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand the role of 'Journalizing CDC'.
    It is the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (Triggers (Synchronous), Streams / Logminer (Asynchronous), GoldenGate, etc.).
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the J$ source, and the IKM step changes with CDC as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have a PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael
    A lot to take in. As I advised, I would recommend you get a little test area set up and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • What are the differences between API and SDK?

    Hi,
    Could anybody clarify for me what the differences between an API and an SDK are? I googled for the answer and couldn't find anything on this topic.
    Many thanks in advance.
    javasfan

    Is it correct to say that "an SDK includes all the APIs", or that "the API sits on top of the SDK"?
    It's a bit weird to say either. First, the JDK doesn't include all APIs, just the J2SE core API; others, like 3rd-party libraries or J2EE, are not included. Second, if you mean the API docs, they're also not included, IIRC.
    It'd technically only be correct to say "the SDK provides an API", I guess. The example is very lame, but: if the SDK is a machine, the API is the sum of its buttons and levers and gauges and intakes and outlets and exhausts. The API docs are the manual.

  • What are the differences between inactive and active ABAP objects?

    Can anybody tell me what are the differences between inactive and active ABAP objects?
    In my opinion, an active object is compiled and available system-wide, which means the system does not have to compile the program again before the object is run or used. An inactive object is not available system-wide; every time you run an inactive object, the ABAP runtime first has to generate a temporary runtime object, and the inactive object cannot be seen by others.
    Am I right? Can anybody kindly tell me about other differences?

    Hi,
    "When it is inactive, it is like it would not exist at all:" no - it's like it only exists to you
    "If we just saved that one means it is stored in application server not in database": no - the inactive version is also stored in the database. You can log off and log on and it will still be there, in its inactive status.
    "Only active objects can be executed.": no - inactive objects can be executed by you
    When you create or modify a program, it is inactive until you activate it.
    With a change, there are two versions of the program stored in the database - the active version (as it was before you made your change), and the inactive version. If you attempt to run the program, you'll run the inactive version - the one with your changes. Everyone else on the system will run the active version.
    In this way, you can make changes without affecting anyone else.
    Once you activate your program, then the inactive version becomes the active version.
    With a create, there is no active version, until you hit the activate button. This means ONLY you can run the program.
    An additional benefit of this model, is that if you make a change, save it, and then change your mind without activating, you can recover the active version into the editor, using version management.
    A downside is that sometimes you have to activate your change before you can test it, if it interacts with other, active, programs.
    Regards,
    Kumar

  • What are the differences between StreamConnection and SocketConnection?

    hi guys.
    I desperately need some explanation. I have googled and the results are not useful.
    What are the differences between StreamConnection and SocketConnection?
    I have a J2SE server using a server connection.
    The articles on the web are contradictory; as a result I am really confused and don't know which one I should use to create the socket connection in my J2ME MIDlet.
    Please help me.

    Hi, SocketConnection is a sub-interface of StreamConnection. In other words, StreamConnection is a generic interface (a template) on which more specific connection types such as FileConnection, HttpConnection, etc. are built.
    If you have to connect to a specific server over a plain socket, use SocketConnection; if it is an HTTP server, use HttpConnection, and so on.

  • What is the use of AET? What are the differences between AET and EEWB?

    Hi,
    I would like to know about the AET. What is the use of the AET, and what are the differences between the AET and the EEWB? Please help me out.
    Thanks,
    Satish

    Hi
    You can refer the following links for your question.
    Difference between AET and EEWB
    What is the use of AET? What are the differences between AET and EEWB?
    Difference between EEWB - UI Configuration Tool - AET
    http://senthilsapcrm.wordpress.com/2010/02/04/adding-custom-fields-in-sap-crm-7-0-using-aet/
    What is the main difference between eewb and aet tool ?
    Hope it is useful.
    Thanks and regards
    Preeti Viswanath

  • What are the speeds like?

    Hi guys, I'm currently on O2 broadband and it's slow, and I'm looking to upgrade to Infinity in July when it's available in my area (Guildford). I want to ask what the speeds are like when playing on Xbox Live or PSN, downloading torrents, and downloading from file upload sites such as Megaupload etc. Most of our home internet usage is online gaming and downloading from file upload sites. We rarely use torrents.
    Thank you!

    You would like a graphic off me? Having said that, I've just checked and for some reason Friday and Saturday have gone up to 28ms. I've got lots of graphs to show this, but it would take up too much space on here. This is my dashboard so you can see:
    Iechyd da (cheers)
    On the Sky Twitter account a customer asked why Sky Go streams are worse than SD while the BT and Eurosport apps stream in HD. The reply from a mod was, roughly: that's easy, the HD files are so huge that Sky Go can't play them, no app can stream HD because of this, so when they say they are HD they are really SD streams; if there was any way around it, it would have been done by now.

  • What are Header Level, Item Level and Schedule Level data?

    Hi ,
    Can anyone please explain what Header Level, Item Level and Schedule Level data are in the R/3 system, i.e. what data structures they actually contain? If there is any document or link available, please do send it. Urgent.
    Thanks
    Prashant singhal

    Hi Prashant,
      check this link.
    Extractors
    Regards,
    Harold.

  • GCF: what are the functions of GCF and where can I find it?

    Hello All,
    Can anyone throw some light on what the user is expecting? What is GCF, what are the functions of GCF, and where can I find it?
    The user would like to have the list of customers who have made a modification via the GCF (e.g. modification of the subscription period, the amount, the contract end, ...).
    Thanks in advance
    Srikanth Ravinutala

    Hello Dinakar,
    Please read this before closing your old threads.
    Read This Before Closing your Threads
    Thanks,
    Jignesh Mehta

  • What are the versions of BW and what are the differences between them?


    Hi Reddy,
    SAP BW versions include 2.0A, 2.0B, 3.0A, 3.0B, 3.1C, 3.5, and now BI 7.0.
    Major differences between BW 3.5 and BI 7.0:
    1. In InfoSets you can now include InfoCubes as well.
    2. The Remodeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle. This is only for InfoCubes.
    3. The BI Accelerator (for now only for InfoCubes) helps reduce query run time by almost a factor of 10 - 100. The BI Accelerator is a separate box and would cost more.
    4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP (Enterprise Portal) person on your project to implement the portal.
    5. Search functionality has improved: you can search for any object, unlike in 3.5.
    6. Transformations are in and routines are passé - though you can always revert to the old Tcodes.
    7. The Data Warehousing Workbench replaces the Administrator Workbench.
    8. Functional enhancements have been made for the Data Store object:
    New type of Data Store object, Enhanced settings for performance optimization of Data Store objects.
    9. The transformation replaces the transfer and update rules.
    10. New authorization objects have been added
    11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
    12. The DataSource: there is a new object concept for the DataSource.
    Options for direct access to data have been enhanced.
    From BI, remote activation of Data Sources is possible in SAP source systems.
    13. There are functional changes to the Persistent Staging Area (PSA).
    14. BI supports real-time data acquisition.
    15. SAP BW is now known formally as BI (part of NetWeaver 2004s). It implements the Enterprise Data Warehousing (EDW). The new features/ Major differences include:
    a) Renamed ODS as Data Store.
    b) Inclusion of Write-optimized Data Store which does not have any change log and the requests need no activation
    c) Unification of Transfer and Update rules
    d) Introduction of "end routine" and "Expert Routine"
    e) Push of XML data into BI system (into PSA) without Service API or Delta Queue
    f) Introduction of BI accelerator that significantly improves the performance.
    g) Loading through the PSA has become a must. InfoPackages are used to load data up to the PSA only;
        you need to create a DTP to update data from the PSA to the data target.
    Regards,
    Ram.

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file to a DSO. Can anyone tell me what the settings for the DataSource and InfoPackage are for flat file loading?
    Please let me know.
    regards
    kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system
    Uploading of Transaction data
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modeling area.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template; then you do not have to map them during the transformation, because the system will map them automatically. If you do not map them in the Fields tab, you have to map them manually in the transformation in the InfoProvider.
    • Activate the data source and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click Create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is being stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right-click the data target and select Manage; in the Contents tab, select Contents to view the loaded data. There are two tables in an ODS, the new table and the active table; to move data from the new table to the active table you have to activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    For Uploading of master data in BI 7.0
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modeling area.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template; then you do not have to map them during the transformation, because the system will map them automatically. If you do not map them in the Fields tab, you have to map them manually in the transformation in the InfoProvider.
    • Activate the data source and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to select Insert Characteristics as info provider
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is being stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.

  • What are the functions of look and feel files?

    Hi,
    Could someone explain to me the individual roles of the following files in the look and feel of the ISA B2C application?
    mainFS.jsp
    main_inner.jsp
    catalogFS.jsp
    accountFS.jsp
    refresherB2C.jsp
    I want to know the function of each file. If I change a file, where exactly can I see the changes I made?
    Any help would be highly appreciated.
    Thanks.
    Ashish

    Hi,
    If you get hold of the ISA Development and Extension guide there is a section highlighting which frames go where in the main application.
    You can also see which .jsp pages are displayed where by adding the following parameter to the root URL of the application when you logon.
    http://<host>:<port>/b2c/b2c/init.do?showmodulename=true
    This will display the name of each jsp page in the frames on screen.
    Also, surely you must have been asked to change certain aspects of the application - not specific files?  You shouldn't just change the files and then "see what happens"...
    Hope this helps,
    Gareth.
