Struggling with Z8 focus tracking


Stick with me for a moment and I'll throw another confounding variable into the mix. I walked over to the pond yesterday to see my Trumpeter friends, who graciously agreed to another photo shoot (this one terminated early by an inconsiderate lady "walking" her dog off the leash). This time I had the 186 on one body and the 70-180 f/2.8 on the other. Both bodies were configured the same way, and both lenses were dialed in at 180mm f/6.3.

Starting again in AA mode with SD, I observed some interesting behavior. The 186 locked on right away and stayed there. With the 70-180, SD indicated finding the eye with the white box, but when I pressed the BBF button the green box would occasionally oscillate between the eye and a larger box on the body. I switched to dynamic small and both lenses behaved similarly, with the white box appearing on the eye followed by the green box when BBF was pressed. To confirm, I swapped the lenses between bodies, and the behavior was exactly the same, indicating that the bodies were behaving identically and that the observed differences were attributable to the lenses.

Next, I moved half the distance closer and dialed back the focal length on the 70-180 so that the bird was the same size in the frame as it was at 180mm. Repeating this experiment, the 70-180's SD instantly recognized the eye with the white box, but pressing BBF resulted in an even greater likelihood of oscillation of the green box. To me, this suggests that some part of the AF process is linked to focal length. Unfortunately, I was again unable to test other AF modes/settings thanks to the friendly dog walker.
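For reference, the "same size in frame" condition above follows from magnification being roughly proportional to focal length divided by subject distance, so at half the distance, half the focal length gives the same framing. A quick sketch (the distances here are illustrative, not from the actual shoot):

```python
def matched_focal_length(fl_mm: float, old_dist_m: float, new_dist_m: float) -> float:
    """Focal length that keeps the subject the same size in frame after
    moving from old_dist_m to new_dist_m (thin-lens approximation,
    valid when subject distance is much larger than focal length)."""
    return fl_mm * new_dist_m / old_dist_m

# Example: shooting at 180mm, then moving to half the distance.
print(matched_focal_length(180, 30.0, 15.0))  # 90.0
```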

I'm not sure whether or not it's related, but this makes me think of some weird behavior I had found in terms of subject distance.

The Z8/9 records the focus distance in the EXIF data for each image. You will occasionally see people look this up when trying to help with posts about poor AF performance - e.g., they will pull up the EXIF and say, "you were 50m from the subject when you took this photo, that is too far," or, "at 30m you should have done X instead of Y," or whatever. The camera also displays this distance when manually focusing with some lenses.

Yet I found something odd when trying to do some sharpness comparisons between a few lenses: two lenses taking photos of the same object from the same distance reported totally different focus distances.

So I went ahead and did a fairly elaborate test of several lenses - the 85 f/1.8, the 24-120, the 70-200, and the 180-600 - along with taking a few bonus measurements with other lenses like the 40 f/2. I found a good, high-contrast street sign and physically measured and marked off three distances from it: 22m, 29.5m, and 36.8m. Then I took photos at each distance with each lens, and with each lens I shot both extremes of its focal range as well as a few focal lengths that overlap between the lenses. In other words, I took photos at 24mm, 70mm, 85mm, 120mm, 180mm, 200mm, and 600mm, and with each lens did as many of those as it could.

What I found was that the camera not only failed to report accurate distance measurements: it didn't appear to report any kind of actual measurement at all. Rather, each lens reported the same single number regardless of the distance.

For instance, the 24-120mm reported 17.24m for all three distances. It wasn't just wrong - it was as if it wasn't even taking a measurement and just reported some arbitrary number. The 70-200 reported 38.52m for all three distances. Etc.

Well, that is only partially true. What is actually true is that as the focal length changed, a lens would sometimes swap to a new "random" number. For instance, the 70-200 reported 38.52m for all distances and focal lengths, except that at a real physical distance of 36.8m it started reporting 19.31m when at 200mm.

I also recorded the absolute lens positions from the EXIF data. This is some internal value that the lenses use to indicate exactly where the focusing element was. These WERE different with each shot.
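If anyone wants to repeat this, the pattern is easy to check once you've dumped the EXIF (e.g. with exiftool). A minimal sketch of the analysis, using made-up records shaped like the observations above (not a real EXIF dump):

```python
from collections import defaultdict

def reported_distances_by_lens(records):
    """records: (lens, true_distance_m, reported_distance_m) tuples.
    Returns, per lens, the sorted distinct distances the camera reported.
    A real measurement should produce one value per true distance; a
    single repeated value suggests no actual measurement is happening."""
    by_lens = defaultdict(set)
    for lens, _true_dist, reported in records:
        by_lens[lens].add(reported)
    return {lens: sorted(vals) for lens, vals in by_lens.items()}

# Illustrative data mirroring the observations above.
records = [
    ("24-120", 22.0, 17.24), ("24-120", 29.5, 17.24), ("24-120", 36.8, 17.24),
    ("70-200", 22.0, 38.52), ("70-200", 29.5, 38.52), ("70-200", 36.8, 19.31),
]
print(reported_distances_by_lens(records))
# {'24-120': [17.24], '70-200': [19.31, 38.52]}
```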

You can read my full report here: https://bcgforums.com/threads/norma...arently-meaningless-values.36275/#post-405964

So my takeaway here is that something odd may indeed be going on in the way the system handles focal length and subject distance during AF.
 
Tonight I decided to add my own experiments to the pool of trials using dynamic-area AF, but with my Z6ii. I used a Sigma Art 135mm f/1.8 lens because of the poor lighting. I was at a Little League baseball game and decided to focus on the pitcher because he was closest to me and standing on the mound (hence not moving around like a fielder). I do not use BBF. The Z6ii was set to AF-C + dynamic-area AF mode.

I placed the center focus point - the one inside the boundary array of 8 small red squares (i.e., the typical dynamic array box) - squarely on the pitcher. Then I slowly moved the camera to the left, thereby moving the center focus box slightly to the right, and focus remained locked on the pitcher even after the center box had moved off of him, as it is theoretically supposed to behave. As I continued moving the camera left, pushing the focus box further right, the pitcher remained in focus even while most of his body was outside the array of 8 squares! On some occasions, with the center focus box off the pitcher and further to the right, the camera lost focus, but as soon as he moved his arm to begin pitching, he snapped back into focus because of the arm movement, and then went out again. It was as if a slight movement of his arm was enough to cause focus to be either lost or recaptured while I kept the subject in the same position!! I repeated this experiment a couple of times with the subject off the center focus box but within the array of 8 squares, and each time a slight movement of his arm would cause focus to be lost and re-acquired. I don't know how a slight arm movement could affect focus being lost or re-acquired while he was still standing in the same position within the array, but this algorithm must be one heck of a marvel of pixel detection.

Now, when I repeated these trials moving the camera to the right (making the pitcher move to the left), as soon as he left the center focus square, the pitcher went OOF. Moving the pitcher to the left had no margin for error; he went OOF immediately. My impression is that the right side of the boundary array has more apparent latitude than the left side.

I also remembered that 3 years ago, when I bought my first Z6 and the Z 70-200mm f/2.8 lens, I could never maintain focus once the subject left the center focus box, even though it remained within the boundary box of the 8 helper points. This occurred more often when I was zoomed out to the 200mm focal length. I complained loudly and vociferously enough to my NPS rep that he told me to send the lens back for a refund. Since then I no longer use a Z 70-200; instead I use a 70-300mm AF-P lens in daylight and the Sigma Art 135mm f/1.8 at night.

Dynamic area AF mode continues to be a mystery in the mirrorless world.
 
Inspired a bit by Tomcat, I got the camera out just now and did a few quick tests.

First, I did find a noticeable difference in AF stickiness with panning right vs. left vs. up or down depending on the subject and background. I think part of this was simply a matter of the difference between the colors/contrast of the background and of the different parts of the subject, but even when trying to make this uniform there was still at times a slight difference.

Second, I think back to one of the comments above about having better success with Dynamic-area small than with medium. I was able to reproduce this, and my initial impression in comparing the two is that the AF system pays more attention to absolute distance from the center than to the actual helper points - as though the helper points are not really functioning as AF points the way they may have with DSLRs, but are more a visual indicator to the user of how far from the center you can drift and maintain focus. I also noticed that the points seem to represent more of an ultimate boundary than the centers of extra "helper points": if I positioned the dots of the "helper points" close to the edge of the subject, the camera was much more likely to lose focus than if I kept them just a little bit further in.

Third, I created a video to demonstrate a few things:


In this video, you will see the following.

First, I set up an object and show that if I move the center point off of it but keep the helper points on it, the system will maintain focus for a short delay before refocusing on the background. I also show that size in the viewfinder does not appear to make much of a difference, by shooting in DX mode, where the object is relatively large in the frame, and in FX mode, where it is pretty small.

Second, I demonstrate that if I set a3 to 1 (quick) and perform the same motion, it will refocus almost immediately. Clearly, then, a3 does affect how the dynamic area mode works even when we are not talking about blocked shots. Some had raised the question of whether it applied to those helper points or only to blocked shots.

Third, I put a much closer background behind the subject. I tried to choose something similar in color to the subject. Here you can see that no matter how slowly I move the camera while keeping the helper points on the subject, and even with a3 set to 5 (delayed) the camera refocuses on the background almost immediately.

Fourth, I move the background back a little bit so that it is more out of focus when I am focused on the subject. You can see that this restores the "correct" behavior: the camera now stays focused on the subject for a little while when placing helper points only on the subject.

Fifth, I select a background colored slightly differently from the subject and try again. It does not improve matters; the camera still refocuses almost immediately.

Sixth, I select a background with very different coloring from the subject, and you can see that the camera now performs better at keeping focus on the subject even when the background is close. It's not as good as when the background is a bit more distant, but it's better than the close but similarly colored background.

I think that some preliminary but probably relatively solid conclusions here are:

1) The AF system really does prefer darker, more contrasty backgrounds, to the point that it doesn't do a good job following its theoretical logic when such a background presents itself. It will go for that background no matter what its logic or algorithms say it is supposed to do.

2) The size of the object in the frame may not be a significant factor in AF performance here, but the degree to which a background is in focus - and thus more contrasty - makes a big difference, and this is perhaps a real reason that longer focal lengths may yield better results.

Although I couldn't get it to reproduce when filming the video, in the testing I did before recording I found that, with that blue/yellow box as the background, the camera would always jump right to the background when moving the center point over the brown/green part of the box, but would much more frequently stay on the subject when moving over the blue part. I presume this is because the blue part is lighter/lower contrast than the darker, more contrasty browns and greens.

I also have a suspicion/speculation based on the sum total of the experimenting I have done: I wonder if the dynamic-area modes on the Z cameras are actually just wide-area modes with a different visual skin on them. The more I experiment, the more the dynamic-area modes function an awful lot like wide-area modes with subject detection turned off, where the "helper points" are, as I suggested above, just indicators of how far outside the box you can move and still have the system maintain focus. There's been so much talk in the discussions of this topic about how they operate differently from the DSLR dynamic-area modes, and I wonder if that's why: because they're not really modes where a center point briefly hands off to other actual focus points, but are just wide-area modes made to look like dynamic-area modes. Consider how Nikon dropped classifying the different modes as D9, D25, etc. and just went to small/medium/large. Maybe that's because there aren't actually discrete AF points acting as discrete AF points; they're just acting as part of the wide-area arrays, like with every other focus mode on the camera.
 
I consider your feedback very important, as you are describing what works best for you and when.
Many things can affect the autofocus process - including in-lens VR.

The 70-180 relies on IBIS for axes 1 and 2, worth probably 3.5 stops of camera-shake correction at 180mm on a 5-axis IBIS body.
The 180-600 uses in-lens VR with a CIPA rating of 5.5 stops at 600mm.
Assuming IBIS/VR was on, the difference in axis 1 and 2 performance could easily be enough to explain the different "jumping" effects you noted.
This may also help clarify why, at longer focal lengths, in-lens VR can be more effective than IBIS.
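To put those CIPA stop ratings in rough shutter-speed terms, here is a quick sketch using the common 1/focal-length handheld rule of thumb (the exact baseline is debatable, and CIPA ratings are best-case lab numbers):

```python
def slowest_handheld_shutter(focal_length_mm: float, stabilization_stops: float) -> float:
    """Rule-of-thumb slowest handheld shutter time in seconds:
    baseline of 1/focal_length, extended by 2**stops of stabilization."""
    return (1.0 / focal_length_mm) * (2.0 ** stabilization_stops)

# 70-180 at 180mm with ~3.5 stops of IBIS vs. 180-600 at 600mm with 5.5 stops of VR.
print(round(slowest_handheld_shutter(180, 3.5), 3))  # 0.063 (~1/16 s)
print(round(slowest_handheld_shutter(600, 5.5), 3))  # 0.075 (~1/13 s)
```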

It would not matter if I spent £100,000 on a piano - I could not play a good tune on it as I have no desire to spend time developing the necessary skill level.

When teaching groups of photographers about advanced photography, I find the ones who volunteer when asked "my camera does not AF on X" learn quickest - they generally appreciate that their input (like learning to play a piano) can be more important than the camera specification when the aim is to improve photographic skill.
 
Hi Len, the point regarding IBIS is interesting and may be involved in some of the AF disparities. FWIW, I use sport mode, and what you are describing as "jumping" is a visual effect in the EVF where IBIS recenters the image, which can be seen in standard mode. What I was describing is a completely different phenomenon involving disparities between SD/tracking and AF.

If one half-presses the shutter button to initiate SD/tracking, the small white box appears. That's what I was seeing, and it correctly detected and tracked the subject's eye. If I pushed the BBF, the green AF box would appear, typically over the white box, though the AF box would oscillate between the eye and the body. Taking my finger off the BBF, the white SD/tracking box reappeared instantly, and always on the subject's eye.

Now, could IBIS be a factor? Certainly, though to me it appears there is something amiss in the way the SD/tracking algorithms interact with the AF ones. There's something going on with the handshake, but that is beyond my depth of understanding. I wish I had an Atomos so you could see the behavior rather than just a description of it.
 
I appreciated the video and setup. There were a couple of observations worth noting. First, I think the a3 blocked-shot response is operating as intended. What I observed was that as you dialed up the blocked-shot number, moved the camera with the resultant focus on the background, and then moved it back to the target, it lingered a bit longer on the background before reacquiring focus on the subject. To me, it didn't seem to affect "stickiness" much when you first moved away. Second, and more importantly, it was clear that the central AF point seems to drive the AF priority: if a "helper" point remains on the subject while the central point moves to something with more contrast, the camera instantly refocuses on the higher-contrast target. Also, this seems to confirm that dynamic does not have close-subject priority (we knew that, but it's good to confirm). You may want to repeat the experiment and, rather than using spock ducky(?) as the main subject, print out a contrasty B&W sheet of paper/target and then place a larger/smaller, less contrasty object behind/adjacent to it.

For as much as the Nikon AF system seems like a PC compared to the more Apple-esque, holistic Canon system, it doesn't offer a way to prioritize or bias the system towards subject tracking other than a3 (standard/erratic) that I am aware of. There is a lot happening under the Nikon hood that we don't appreciate or understand, though I wish we had a conduit to the engineers to help make things right.
 

I hadn't thought to pay attention to how long it took to reacquire focus when coming off of the background, though I'm not so sure that what you observed is actually related to the blocked-shot response: I released AF-On every time focus went to the background before I put the point back on the duck, and only pressed the button again after getting the point back on the duck, so each of those focuses on the subject was a fresh focus. I'll go back and look, though.

The only thing I'd disagree with here is the stickiness: there's no question that the AF was much stickier when coming off the subject. I'd encourage you to watch again, because to me it's so different I can't imagine not noticing it. To provide an objective measure, I counted the video frames from the first one where the center point was off the subject to the frame where it focused on the background each time. I only did this for the first half of the video or so because it was tedious, but it was very consistent: with a3 set to 1 (quick) there was an average of 8.7 frames before refocusing (median 10), and with it set to 5 (delayed) there was an average of 71.7 frames (median 71.5). In other words, it hung onto the subject about 8 times longer with a3 set to 5, which is pretty consistent with the way it felt when doing it.
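For anyone repeating the frame-counting exercise, converting counts to wall-clock time is trivial (this sketch assumes a 30 fps recording; substitute your actual frame rate):

```python
def frames_to_seconds(frames: float, fps: float = 30.0) -> float:
    """Convert a counted number of video frames to seconds."""
    return frames / fps

quick = frames_to_seconds(8.7)     # a3 = 1: ~0.29 s before refocusing
delayed = frames_to_seconds(71.7)  # a3 = 5: ~2.39 s before refocusing
print(round(delayed / quick, 1))   # 8.2 (times stickier)
```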

I'm also very curious about the comparison of Canon's AF to Apple's "holistic" system. Can you explain that a bit more?
 
Perhaps the PC/Apple metaphor for Nikon/Canon isn't entirely accurate, but like the former pair, these cameras provide different user experiences. In the days of the Canon DSLRs, they offered modification of subject tracking via "cases" in their equivalent of AF-C (AI Servo mode). One could tweak the various parameters - tracking sensitivity, AF point switching, acceleration/deceleration - to better account for various subject movements. This "case" approach carried over into the first-generation MILCs, and some users found the cases helpful for addressing perceived AF difficulties. Personally, I found the AF tracking so good that tweaks were only rarely needed. Moreover, the need for specialized AF areas seemed to be less of an issue with the Canons. They offered a full array of AF areas - single point, zone, expanded zone, automatic selection, etc. - and they just worked (unlike the difficulties you are experiencing). The automatic area along with tracking/eye detect was so good that it and/or a single point were usually all that was needed. Of the first higher-end releases - namely the R5, R3, and R7 (APS-C) - only the R7 presented what I felt were some challenges. It was a less expensive body that seemed to have some difficulty with tracking/eye detect in some situations. I believed this was computational (that is to say, the camera didn't have enough computing power), because turning off eye detect seemed to obviate the issues when they presented. Nonetheless, in spite of slow read speeds, a dismal buffer, and other deficiencies, I managed to capture some pretty impressive shots, including KFs.

Since those earlier MILCs, the Canon AF system has continued to evolve, relying more on AI-based computation than on user selection of AF zones. Apart from the marginally useful eye-control autofocus, they have added ball tracking (useful for sports) and an ability to "register" people (useful for group settings), and importantly have simplified the case structure to "auto" or legacy "manual" (which allows the user to tweak if necessary). So, in total, it is a different UI feel and function, which I would liken to the difference between using an iPhone and an Android. Can and do people get great results with the Nikon gear? Absolutely - I have a portfolio of wonderful images shot with my Z8 and Nikkor lenses - though I find it exceedingly, and unnecessarily, frustrating at times. Did I encounter situations where the Canon system failed to recognize the target or keep the AF on the desired point? Absolutely, though the issues you've been describing, and those I've been encountering with the Nikon AF system, were not evident when shooting similar subjects with Canon.
 
Perhaps the PC/Apple metaphor for Nikon/Canon isn't entirely accurate but like the former pair these cameras provide different user experiences. In the days of the Canon DSLR's, they offered modification of subject tracking via "cases" in their equivalent of AF-C (AI servo mode). Anyhow, one could tweak the various parameters, tracking sensitivity, AF point switching, acceleration/deceleration to better account for various subject movements. This "case" approach carried over in the first generation MILC's and some users found them helpful to address perceived AF difficulties. Personally, I found the AF tracking so good that only rarely were any tweaks needed. Moreover, the need for specialized AF areas seemed to be less of an issue with the Canons. They offered a full array of AF areas including, single point, zone, expanded zone, automatic selection, etc. and they just worked (unlike the difficulties you are experiencing). The automatic area along with tracking/eye detect was so good that using that and/or a single point were usually all that were needed. Of the first higher end releases, namely the R5, R3, and R7 (APS-C), only the R7 experienced what I felt were some challenges. It was a lesser expensive body which seemed to have some difficulty with tracking/eye detect in some situations. I believed this was computational (that is to say the camera didn't have enough computing power) because turning off the eye detect seemed to obviate the issues when they presented. Nonetheless, in spite of slow read speeds, a dismal buffer, and other deficiencies, I managed to capture some pretty impressive shots including KF's.

Since those earlier MILCs, the Canon AF system has continued to evolve, relying more on AI-based computation than on user selection of AF zones. Apart from the marginally useful Eye Control autofocus, they have added ball tracking (useful for sports) and an ability to "register" people (useful for group settings), and, importantly, they have simplified the case structure to "auto" or legacy "manual" (which lets the user tweak if necessary). So, again, in total it is a different UI feel and function, which I would liken to the difference between using an iPhone versus an Android. Can people get, and are they getting, great results with the Nikon gear? Absolutely, and I have a portfolio of wonderful images shot with my Z8 and Nikkor lenses, though I find it exceedingly, and unnecessarily, frustrating at times. Did I encounter situations where the Canon system failed to recognize the target or keep the AF on the desired point? Absolutely, though the issues you've been describing, and those I've been encountering with the Nikon AF system, were not evident when shooting similar subjects with Canon.

When I first got back into photography after around 15-20 years away (the last camera I had owned was a Canon film SLR), I used a Canon M50 for about a year. There were a few things I didn't like about it, and I eventually got a Nikon DSLR as my primary camera instead, because I had really liked some friends' Nikons I had used during those years away.

While I did have my reasons for moving on from it, when I look back at that M50, obviously one of their earliest, lowest-tier mirrorless models, the AF on it was just as you described: it just worked, and worked very well. The most striking part might be that I got that camera when, having been out of photography for so long, I really had no idea how to use a camera, and was using a very early, rudimentary mirrorless AF system; yet in a lot of ways I still had better success than I do now, with much, much more experience and a much more advanced flagship system. In fairness, I am shooting much more challenging subject matter now than I was then, but even when I do things similar to what I did with the M50, like just trying to take photos of my kids doing something in the yard, it is striking how much easier it was.

I'm trying to figure out what to do now. I really would consider going to Canon if I could work it out, BUT their lens selection leaves a lot to be desired for me, and unfortunately their cameras tend toward lower megapixel counts, which I am not as much a fan of.
 
While I haven't been shooting in dynamic mode, I decided to try shooting more with the Z9 and compare its AF with the Z8's. There is nothing scientific about my methods; this is just about my experiences in a couple of my preferred genres.

First, I compared pictures from the Z9 and Z8 with the 600pf shooting spray planes. Second, I compared results from shooting D-1 football. In both cases my keeper rates were higher, and the photos were generally sharper overall, with the Z9. This was with my preferred focus settings: both Wide S and Wide L, and 3D. I also shot football with single point, and with handoffs from Wide S to 3D, using the 100-400 on both bodies.

When I first got the Z8, I used it mainly as a backup to my Z9. Over a couple of years I have shot my Z8 more; for everyday use I carried the Z8 more and more with the 600pf and 100-400. I took many good shots with the camera and will continue to use it as I have, but it is just slightly below the Z9. Just guesstimating, but I think the Z9 had a 5 to 10% higher keeper rate. I added a couple of examples below.
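For anyone who wants to turn that kind of guesstimate into an actual number, a quick script over a culled shoot folder can do it. This is just a minimal sketch, not anything the poster used: it assumes (based on the filenames in the attached examples) that the Z9 was set to a "Z9W_" filename prefix while the Z8 kept the default "DSC_" prefix — adjust `body_from_name` to your own naming scheme, or read the camera model from EXIF instead.

```python
from collections import Counter

def body_from_name(name: str) -> str:
    """Guess which body took a shot from its filename prefix.
    Assumption: Z9 files start with 'Z9W_', everything else is the Z8."""
    return "Z9" if name.startswith("Z9W_") else "Z8"

def keeper_rates(all_shots, keepers):
    """Per-body keeper rate: kept shots divided by total shots."""
    totals = Counter(body_from_name(n) for n in all_shots)
    kept = Counter(body_from_name(n) for n in keepers)
    return {body: kept[body] / totals[body] for body in totals}

# Example with made-up counts (in practice, list the card dump and the
# keepers folder with os.listdir):
shots = [f"Z9W_{i:04d}.jpg" for i in range(100)] + \
        [f"DSC_{i:04d}.jpg" for i in range(100)]
keep = shots[:62] + shots[100:155]   # 62/100 Z9 keepers, 55/100 Z8 keepers
rates = keeper_rates(shots, keep)
print(rates)   # {'Z9': 0.62, 'Z8': 0.55}
```

With real folders you would substitute directory listings for the made-up lists; the difference between the two rates is the gap being estimated above.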

Back to the original main topic. SCoombs, I am not doubting your conclusions. I have not had a need to use dynamic-area mode since the first or second major firmware update of the Z9; before that, I did use dynamic area for basketball. During my 10 years of shooting D-1 sports, I have gotten away from shooting anything longer than about 60 yards in most cases in football. I still shoot long shots into the outfield in baseball, though it is very rare that I will use one of those shots. However, those are almost the perfect setting for long shots, as you have a solid background with the wall and generally a single player in a uniform that contrasts with it.

Back to football. We are only allowed from the 20 yard lines outward to the endzone and in the back of the endzone. If I am sitting in the endzone, I don't try to shoot past the 50 yard line.

Photos 1 and 3 taken with Z8 with 600pf in WideL airplane detection.
Photos 2 and 4 taken with Z9 with 600pf in WideL airplane detection.

Photos 5 and 6 were both taken with the Z8 in Wide S human detection.


DSC_8474-Enhanced-SR.jpg
Z9W_1045.jpg
DSC_4695.jpg
Z9W_1377.jpg

DSC_9701-Enhanced-NR.jpg
DSC_0742.jpg
 
I have been following this thread, but I have to admit that I've lost track of all the options tried. I shoot wildlife and have been blown away by my Z9's capabilities, including for BiF. I also shoot my two-year-old daughter running/biking around with AA and SD, which works perfectly for her, but that's obviously not similar to shooting sports.

I remembered a recent thread on FredMiranda where people discussed their approach to shooting sports. It looks unanimous that folks use Wide Area Small with SD (or a custom box that is about the size of the player with SD) and then hand off to 3D. Is that something SCoombs has tried?
 