Long HDMI runs

Hi,
Has anyone experienced issues with the Apple TV and long HDMI runs?
I have a 15 m cable connected to a Samsung TV. It works fine with other source equipment, such as my laptop, but the TV reports "no signal" when I connect the Apple TV.
I know the Apple TV works, as I have tested it and all is fine. It's only when I connect the longer cable that the signal seems to struggle.
Any ideas?

Uniform diameter, and a tight tolerance on the length of each component strand, are somewhat critical in an HDMI cable.
Changes of diameter along a strand can cause impedance differences that affect signal throughput and, in some cases, cause the signal to reflect back to the sending device. Each strand must also match the others in length to within 1/20,000th (or 1/80,000th, I can't remember which) of an inch, or otherwise the signals may arrive out of sync.
Ensuring that these tolerances are met in long cables is more difficult and results in a higher rejection rate, and therefore higher costs.
Natural expansion and contraction also has more of an effect on these tolerances in long cables than in shorter ones. This is generally what repeaters are for: they eliminate the long single runs that are generally considered ill advised.

Similar Messages

  • SQL query taking too long to run

    I am not sure what to do. My app takes too long to run, and the reason is right in front of me, but I don't know what to do about it or where to go. (VB.Net, VS 2005)
    The main part of my query takes about 15 to 20 seconds to run. When I tack on this other part, the response slows to over 2 minutes.
    Even running this other part alone takes 2+ minutes. The query has two SUM functions. Is it possible, somehow, to have this query build its results into another table first, so I can join that table, with the results already computed, into the main part of the query?
    I am using Oracle 9i with SQL Developer, which I am new to. Below is the culprit. Thanks.
    SELECT adorder.primaryorderer_client_id,
           SUM(CASE WHEN insertion.insert_date_id >= 2648 AND insertion.insert_date_id < 2683
                    THEN insertchargedetail.amount_insertdetail ELSE 0 END) AS currev,
           SUM(CASE WHEN insertion.insert_date_id >= 2282 AND insertion.insert_date_id < 2317
                    THEN insertchargedetail.amount_insertdetail ELSE 0 END) AS lastrev
    FROM   adorder
           INNER JOIN insertion ON adorder.id = insertion.adorder_id
           INNER JOIN insertchargesummary ON insertion.id = insertchargesummary.insertion_id
           INNER JOIN insertchargedetail ON insertchargesummary.id = insertchargedetail.insertchargesummary_id
    WHERE  (insertion.insert_date_id >= 2282 AND insertion.insert_date_id < 2317)
        OR (insertion.insert_date_id >= 2648 AND insertion.insert_date_id < 2683)
    GROUP BY adorder.primaryorderer_client_id;

    How to post a tuning request:
    HOW TO: Post a SQL statement tuning request - template posting
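    For what it's worth, the "build the results into another table first" idea the poster describes can be sketched with a subquery factoring (WITH) clause. This is a minimal sketch reusing the table and column names from the post; the MATERIALIZE hint is undocumented Oracle behavior, so treat its effect as an assumption:
    -- Pre-aggregate the detail amounts once, then join the small result set.
    WITH revsum AS (
      SELECT /*+ materialize */
             i.adorder_id,
             SUM(CASE WHEN i.insert_date_id >= 2648 AND i.insert_date_id < 2683
                      THEN d.amount_insertdetail ELSE 0 END) AS currev,
             SUM(CASE WHEN i.insert_date_id >= 2282 AND i.insert_date_id < 2317
                      THEN d.amount_insertdetail ELSE 0 END) AS lastrev
      FROM   insertion i
             INNER JOIN insertchargesummary s ON i.id = s.insertion_id
             INNER JOIN insertchargedetail d ON s.id = d.insertchargesummary_id
      WHERE  (i.insert_date_id >= 2282 AND i.insert_date_id < 2317)
          OR (i.insert_date_id >= 2648 AND i.insert_date_id < 2683)
      GROUP BY i.adorder_id
    )
    SELECT a.primaryorderer_client_id,
           SUM(r.currev)  AS currev,
           SUM(r.lastrev) AS lastrev
    FROM   adorder a
           INNER JOIN revsum r ON a.id = r.adorder_id
    GROUP BY a.primaryorderer_client_id;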

  • Long OLAP running time on portal

    Hi experts,
    I have a portal report (four queries) which takes very long to run. From the RSRDstat results, one of the queries spends 1000 seconds in OLAP, whereas the other queries are fine.
    I tested the report in the Analyzer, in IB, and in the portal. The problematic query shows only about 20 to 40 seconds of OLAP time in the Analyzer and IB, and it is faster when it reads the cache. However, it takes about 1000 seconds in the portal whether it reads the cache or not.
    I have enabled the query property option for structure selection and calculated key figures (the problematic query has a big structure with calculated key figures), and it did improve things slightly.
    I don't think users would want to change the query.
    Any ideas? Could this be a problem with the portal?
    Thanks
    Feng

    By "portal", do you mean a web report or the BI Portal?

  • Webi servers restart every 2 hours when a long publication is running

    I have 4 Webi servers.
    When a long publication runs for more than 2 hours, the Webi servers restart.
    All four servers restart simultaneously.
    I tried tuning "Timeout Before Recycling (seconds)" and "Maximum Documents Before Recycling", but with no result.
    webi, webi1
      Timeout Before Recycling (seconds): 1200
      Maximum Documents Before Recycling: 50 (webi), 1000 (webi1)
      Enable Memory Analysis: disabled
    webi10, webi11
      Timeout Before Recycling (seconds): 7200
      Maximum Documents Before Recycling: 100
      Enable Memory Analysis: enabled
    How should the Webi servers be configured?
    They need to work through business hours without restarting.
    BIP SP4 Patch 6, Windows

    Increase the heap memory of the APS.
    Which APS type? The APS here is divided into multiple servers:
    BISRV1.AdaptiveJobServer
    -javaargs "Djava.awt.headless=true,Dcom.businessobjects.mds.cs.ImplementationID=csEX,XX:MaxPermSize=512m,Xmx8g,Dbusinessobjects.connectivity.directory=C:/Program Files (x86)/SAP BusinessObjects/SAP BusinessObjects Enterprise XI 4.0//dataAccess/connectionServer"
    BISRV1.APS_DSLBRIDGE
    -Xms2g -Xmx16g
    BISRV1.APS_Publishing_One
    -Xmx8g
    BISRV1.APS_Publishing_Post_Processing_One
    -Xmx8g
    BISRV1.APS_Publishing (Publishing_One + Publishing_Post_Processing_One)
    -Xmx4g
    Others: two Visualization APSs, a Search APS, and an LCM APS

  • Update statement takes too long to run

    Hello,
    I am running this simple update statement, but it takes too long. It ran for 16 hours and then I cancelled it; it was not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K of them. If I add ROWNUM < 20, the update statement works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
    UPDATE DEV_OCS.DOCMETA IPM
    SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
                             FROM CAP.ESS_LOOKUP@remote_db LKP  -- DB link name was obfuscated by the forum; remote_db is a placeholder
                             WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1
                               AND IPM.XIPMSYS_APP_ID = 2)
    WHERE IPM.XIPMSYS_APP_ID = 2;
    Thanks,
    Ilya

    matthew_morris wrote:
    In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, and thus a great deal of network traffic (not to mention each iteration performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
    Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behavior. For example:
    {code}
    SQL> set linesize 132
    SQL> explain plan for
    2 update emp e
    3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    ------------------------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst  |IN-OUT|
    ------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT   |      |    14 |    42 |    17  (83)| 00:00:01 |       |      |
    |   1 |  UPDATE            | EMP  |       |       |            |          |       |      |
    |   2 |   TABLE ACCESS FULL| EMP  |    14 |    42 |     3   (0)| 00:00:01 |       |      |
    |   3 |    REMOTE          | DEPT |     1 |    13 |     0   (0)| 00:00:01 | SOL10 | R->S |
    ------------------------------------------------------------------------------------------
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    ------------------------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst  |IN-OUT|
    ------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT   |      |    14 |    42 |    17  (83)| 00:00:01 |       |      |
    |   1 |  UPDATE            | EMP  |       |       |            |          |       |      |
    |   2 |   TABLE ACCESS FULL| EMP  |    14 |    42 |     3   (0)| 00:00:01 |       |      |
    |   3 |    REMOTE          | DEPT |     1 |    13 |     0   (0)| 00:00:01 | SOL10 | R->S |
    ------------------------------------------------------------------------------------------
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL>
    {code}
    As you can see, WITH clause by itself guaranties nothing. We must force optimizer to materialize it:
    {code}
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3568118945
    ------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Inst  |IN-OUT|
    ------------------------------------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT            |                            |    14 |    42 |    87  (17)| 00:00:02 |       |      |
    |   1 |  UPDATE                     | EMP                        |       |       |            |          |       |      |
    |   2 |   TABLE ACCESS FULL         | EMP                        |    14 |    42 |     3   (0)| 00:00:01 |       |      |
    |   3 |   TEMP TABLE TRANSFORMATION |                            |       |       |            |          |       |      |
    |   4 |    LOAD AS SELECT           | SYS_TEMP_0FD9D6603_1CEEEBC |       |       |            |          |       |      |
    |   5 |     REMOTE                  | DEPT                       |     4 |    80 |     3   (0)| 00:00:01 | SOL10 | R->S |
    |*  6 |    VIEW                     |                            |     4 |    52 |     2   (0)| 00:00:01 |       |      |
    |   7 |     TABLE ACCESS FULL       | SYS_TEMP_0FD9D6603_1CEEEBC |     4 |    80 |     2   (0)| 00:00:01 |       |      |
    ------------------------------------------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    6 - filter("T"."DEPTNO"=:B1)
    Remote SQL Information (identified by operation id):
    PLAN_TABLE_OUTPUT
    5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
    25 rows selected.
    SQL>
    {code}
    I do know the MATERIALIZE hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
    SY.
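    For completeness, here is a minimal sketch of the "split the statement in two" alternative SY mentions, staging the remote table locally before the update. It reuses the EMP/DEPT example above; the global temporary table is my own assumption, not something from the thread:
    -- Stage the remote table once, then update against the local copy.
    CREATE GLOBAL TEMPORARY TABLE dept_stage ON COMMIT PRESERVE ROWS
      AS SELECT * FROM dept@sol10 WHERE 1 = 0;  -- definition only, no rows yet

    INSERT INTO dept_stage SELECT * FROM dept@sol10;  -- one remote round trip

    UPDATE emp e
    SET e.deptno = (SELECT t.deptno FROM dept_stage t WHERE t.deptno = e.deptno);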

  • SG 300-10s not performing as well as generic 10/100 switches over a long cable run

    We have recently replaced three 10/100 Netgear and D-Link switches at a customer's site with three SG 300-10 Gb switches. We are having a bandwidth problem with the connection between two of the switches. The customer says that the connection is slower than it was when we were using just the 10/100 switches. Here are the connection details:
    The Cat5 cable length is approx 374 feet (tested with a cable tester; all wires are connected).
    I have one end of the network cable plugged into port 10 on one SG300 and the other end plugged into port 10 on another SG300. When I have the port settings set to "Automatic", I do not get any link light at all. When I set one end to "100M Full Duplex", the switch at the other end sets itself to "100M Half Duplex", I get a link light, and the switches pass network traffic between one another. Since 100M seems to work, I tried setting both ends to 100M full, and to 100M half, but have had no luck getting any better bandwidth than when one switch is set to 100M Full and the other is set to Auto.
    The issue is that they were getting better bandwidth with the old switches.
    Are there port settings that I can change so that these two switches will work better with one another at 10/100 speeds? I would think I could at least duplicate the speed they were getting with the cheaper switches.

    Thanks, Siva.
    My results are attached in two text files. The remote switch is the one at the far end of the long cable run. The middle switch is in the "middle" of the network and is connected to another SG300 switch in the server room.
    In addition to the attached files, I received this message during my telnet session to both switches:
    switcha5eedb#09-May-2013 09:05:49 %CDP-W-DUPLEX_MISMATCH: Duplex mismatch detected on interface gi10.

  • Report taking a lot longer to run in Live environment

    Hi,
    I have created a report in Discoverer Plus; in the Test environment this report takes approx 5-10 secs to run.
    I exported the report using Discoverer Administrator and imported it into the Live environment.
    When I run this report in Live, it takes considerably longer: over 45 minutes.
    Has anyone experienced this issue before, and how did you manage to fix it?
    Many thanks
    Martin

    I don't think I have seen performance quite that poor, but one of my reports doubled in run time, and it turned out:
    1) There was an INDEX missing in my production system
    2) Statistics needed to be run on a couple of the tables
    The worst slowdown I saw was when the DBAs had just brought up my TEST system. All reports were running slower than their DEVL counterparts, but that came down to an init.ora setting the DBAs forgot to add, and I cannot envision that being the case for you on your LIVE system.
    Hope this helps.

  • 3.x Workbooks taking longer and longer to run each week

    Hey all, I have a user who has embedded 5 versions of the same query into a workbook. He runs this workbook every Monday. When he first created the workbook, it took 30 minutes to run. Each week that goes by, the workbook takes longer and longer to run, eventually reaching a runtime of 2 hours. Periodically my user has to make a change to the workbook, and after he recreates it, it goes back to taking 30 minutes to run.
    Is there some kind of buffer that is filling up that I don't know about? Is there a way I can refresh the workbook so that the runtime doesn't creep like it is doing?
    Thanks
    Adam

    Guess I posted prematurely. Looking closer, I realized there was a SELECT happening during this process against a text column without an index. The slowdown was just the increasing cost of looping through the entire dataset, comparing strings that often shared a fairly sizable starting substring. Chalk another problem up to the importance of appropriate indexes in your DB!
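    A minimal sketch of that fix, with hypothetical table and column names: a plain B-tree index on the text column lets the database seek to matching prefixes instead of comparing every row's string.
    -- Hypothetical names; the index supports equality and LIKE 'prefix%' lookups.
    CREATE INDEX workbook_text_idx ON workbook_rows (text_col);

    SELECT id
    FROM   workbook_rows
    WHERE  text_col LIKE 'shared-prefix%';  -- can now range-scan the index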

  • DELETE statement takes a long time to run

    Hi,
    DELETE statements are taking a very long time to complete.
    Can you advise me on how to diagnose the slow performance?
    Thanks

    Deleting rows can be an expensive operation.
    Oracle stores the entire row as the 'before image' of the deleted information (table and index data) in rollback segments, generates redo (keeping the archiver busy), updates free lists for blocks that fall below the PCTUSED setting, and so on.
    SELECT COUNT(*) runs longer because Oracle scans all blocks (up to the high-water mark) whether or not there are any rows in those blocks.
    These operations take more and more time if the tables are loaded with the APPEND hint or with SQL*Loader in direct mode.
    A long time ago, I ran into a similar situation. Data was "deleted" selectively, and SQL*Loader was used (in DIRECT mode) to add new rows to the table. I had a few tables using more blocks than the number of rows in the table!
    Needless to say, the process was changed to export/extract the required data first, truncate the tables, and then load the data back. That worked much better; a sketch of the pattern follows.
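    A minimal sketch of that keep-then-truncate pattern, with hypothetical table and column names (unlike DELETE, TRUNCATE also resets the high-water mark):
    -- 1. Keep only the rows you still need.
    CREATE TABLE big_table_keep AS
      SELECT * FROM big_table WHERE status = 'ACTIVE';
    -- 2. Truncate the original (fast; resets the high-water mark).
    TRUNCATE TABLE big_table;
    -- 3. Load the kept rows back.
    INSERT /*+ append */ INTO big_table SELECT * FROM big_table_keep;
    COMMIT;
    DROP TABLE big_table_keep;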

  • MFC database compact - how long to run?

    Our MFC server ran out of space yesterday, so we've had to do a compact on the database. I followed the instructions in the knowledge base (answer ID 691), backed the database up to another server (mapped a drive and redirected output to that drive), and am now trying to restore it.
    The restore has been running for about 12 hrs now but does not appear to have completed yet (the cursor on the DOS screen has not returned to a prompt).
    Does anyone know:
    a) Roughly how long it would take to restore a 7 GB MFC database? I realise we are restoring from a file on another network drive, which may slow it down, and that it also depends on the speed of the disk subsystem being written to, but a rough idea of the time would be good.
    b) Are there any log files I can check to see if the restore is still happening or something has gone wrong?
    I set the initial database size in the my.cnf file to 10 GB, and on restarting the MySQL service the database size went to 10 GB without any data.
    On reflection, I suppose if I'd set the initial database size to 1 GB, I would have been able to tell the restore was working by watching the database grow; but as I set it to 10 GB and there is only 7 GB's worth of data, I'm not going to see the file size increase.
    The date/time attributes on the file are constantly changing, so is this an indication the file is being written to?
    The knowledge base says it will take a long time to run, but I'm not sure how long "a long time" is. I'm a bit reluctant to stop it, just in case it's nearly complete.

    Can you log in to the mysql prompt and run 'show processlist;' to see what commands are being executed?

  • How to analyze a long-running statement

    Hi experts,
    Please check the following statement. It has been running for a long time in recent days. Can you advise how to analyze it to find the root cause of the long run time? Within it, the function hiroc_get_delta_amount and the large table rmv_policy_premium are responsible for the long run time, but I would still like to get more diagnostic information from the system side.
    select pp.policy_premium_pk,
           pp.policy_fk,
           pp.policy_term_fk,
           pp.risk_fk,
           pp.coverage_fk,
           pp.transaction_log_fk,
           pp.coverage_component_code,
           hiroc_rpt_user.hiroc_get_delta_amount(pp.policy_fk, pp.policy_term_fk, pp.risk_fk,
                                                 pp.coverage_fk, pp.transaction_log_fk,
                                                 pp.coverage_component_code),
           pp.rate_period_from_date
    from proddw_mart.rmv_policy_premium pp
    where pp.rate_period_type_code = 'TERM_COVG'
      and pp.coverage_component_code <> 'NETPREM'
      -- and pp.premium_amount <> 0
      -- *** Following line is included for faster performance
      and hiroc_rpt_user.hiroc_get_delta_amount(pp.policy_fk, pp.policy_term_fk, pp.risk_fk,
                                                pp.coverage_fk, pp.transaction_log_fk,
                                                pp.coverage_component_code) != 0
    group by pp.policy_premium_pk,
             pp.policy_fk,
             pp.policy_term_fk,
             pp.risk_fk,
             pp.coverage_fk,
             pp.transaction_log_fk,
             pp.coverage_component_code,
             pp.rate_period_from_date
    Many Thanks,

    843178 wrote:
    -- *** Following line is included for faster performance
    and hiroc_rpt_user.hiroc_get_delta_amount(pp.policy_fk, pp.policy_term_fk, pp.risk_fk, pp.coverage_fk, pp.transaction_log_fk, pp.coverage_component_code) != 0
    That's an interesting piece of code.
    Apparently, according to the comment, adding a user-defined function to a query is going to improve performance.
    In truth, it is going to add context switching and slow the query down.
    As well as posting the information required (as detailed in the FAQ posts), you will also need to provide details of what that function does. If the functionality can be done in SQL rather than in a PL/SQL function, that will certainly improve things.
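    To illustrate the point, here is a generic sketch with hypothetical names (the real logic of hiroc_get_delta_amount is not shown in the thread): if the function only looks up and sums values, the same result can usually be expressed in plain SQL, avoiding a SQL-to-PL/SQL context switch for every row.
    -- Before: a user-defined function called once per row (hypothetical).
    select o.id, get_total_paid(o.id) from orders o;

    -- After: the same lookup done in pure SQL with a join.
    select o.id, nvl(sum(p.amount), 0) as total_paid
    from   orders o
           left join payments p on p.order_id = o.id
    group by o.id;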

  • Normal accrual runs and accrual reruns take a long time to run

    Hi
    I work in banking, and I have a problem with the accrual runs. They take a long time to run and complete.
    Can somebody advise where I can look to fix this problem?

    Hi, refer to SAP Note 1387986.

  • SQL command too long to run with SQL*Plus

    Hello everyone,
    I'm creating a dynamic SQL script to run with SQL*Plus.
    The script contains only INSERT commands.
    The problem is that SQL*Plus only accepts input lines of up to 2,500 characters.
    I've considered splitting the string into INSERT and UPDATE commands, but I'd first like to know if there is a simpler way to run these commands in SQL*Plus.
    Thanks in advance
    bm

    SQL> create table t(x varchar2(4000));
    Table created.
    SQL> insert into t values (
    '1234567890........0........0........0........0........0........0........0........0......100' ||
    '1234567890........0........0........0........0........0........0........0........0......200' ||
    '1234567890........0........0........0........0........0........0........0........0......300' ||
    '1234567890........0........0........0........0........0........0........0........0......400' ||
    '1234567890........0........0........0........0........0........0........0........0......500' ||
    '1234567890........0........0........0........0........0........0........0........0......600' ||
    '1234567890........0........0........0........0........0........0........0........0......700' ||
    '1234567890........0........0........0........0........0........0........0........0......800' ||
    '1234567890........0........0........0........0........0........0........0........0......900' ||
    '1234567890........0........0........0........0........0........0........0........0.....1000' ||
    '1234567890........0........0........0........0........0........0........0........0.....1100' ||
    '1234567890........0........0........0........0........0........0........0........0.....1200' ||
    '1234567890........0........0........0........0........0........0........0........0.....1300' ||
    '1234567890........0........0........0........0........0........0........0........0.....1400' ||
    '1234567890........0........0........0........0........0........0........0........0.....1500' ||
    '1234567890........0........0........0........0........0........0........0........0.....1600' ||
    '1234567890........0........0........0........0........0........0........0........0.....1700' ||
    '1234567890........0........0........0........0........0........0........0........0.....1800' ||
    '1234567890........0........0........0........0........0........0........0........0.....1900' ||
    '1234567890........0........0........0........0........0........0........0........0.....2000' ||
    '1234567890........0........0........0........0........0........0........0........0.....2100' ||
    '1234567890........0........0........0........0........0........0........0........0.....2200' ||
    '1234567890........0........0........0........0........0........0........0........0.....2300' ||
    '1234567890........0........0........0........0........0........0........0........0.....2400' ||
    '1234567890........0........0........0........0........0........0........0........0.....2500' ||
    '1234567890........0........0........0........0........0........0........0........0.....2600' ||
    '1234567890........0........0........0........0........0........0........0........0.....2700' ||
    '1234567890........0........0........0........0........0........0........0........0.....2800' ||
    '1234567890........0........0........0........0........0........0........0........0.....2900' ||
    '1234567890........0........0........0........0........0........0........0........0.....3000'
    );
    1 row created.
    Break your long value across multiple concatenated lines, as above, to avoid:
    SP2-0027: Input is too long (> 2499 characters) - line ignored

  • I replaced the HD in my iMac; now my Apple TV (gen 1) won't sync with iTunes. I turned on sharing, and now iTunes wants to stream everything to my Apple TV. I can no longer manually sync in iTunes.

    I replaced the HD in my iMac, and now my Apple TV (gen 1) won't sync with iTunes; it does not even show up in the Devices section of iTunes. I turned on sharing in iTunes, and now my Apple TV shows up in the Devices section, but when I click on it, iTunes shows me a streaming sync page as if my Apple TV were a gen 2 unit. iTunes wants to stream everything to my Apple TV, and I can no longer manually sync my iTunes library with it. In the Apple TV menu, when I select Computers, the Apple TV shows me my old hard drive with a lock next to it, and when I click on it I get a warning that if I continue I will erase the entire Apple TV. Do I need to erase the Apple TV and start from an empty Apple TV HD?

    You have to go to your ATV's settings and remove the sync setup with what is, in the ATV's eyes, the "old" computer
    before you can add the "new" computer as a sync computer.
    An ATV can only be set to sync with one computer, and it doesn't know the "old" computer isn't just switched off.

  • iPod nano 6th generation no longer syncing runs to Nike+

    Just got a pop-up when I connected my iPod nano (6th generation) stating that workout data would no longer be sent to Nike+ because iTunes no longer supports pedometer devices. I can't seem to find any details, from either Apple or Nike, about why I can no longer sync workouts.
    Anyone know what's going on with this?

    I found this info on another thread and it worked for me.
    freebahar, Jul 18, 2014 11:01 AM, "Re: Sync 7 Generation iPod Nano with nike+", in response to mccone:
    I had the same problem.
    Here is what I did to fix it:
    Untick the option "connect automatically to nikeplus.com" in your iTunes.
    Click Apply or Sync in the bottom-left corner.
    Then tick "connect automatically to nikeplus.com" again, and
    click Apply or Sync in the bottom-left corner.
    I hope it will work for you as well.
